00:00:00.000  Started by upstream project "autotest-per-patch" build number 132755
00:00:00.000  originally caused by:
00:00:00.000   Started by user sys_sgci
00:00:00.024  Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/vfio-user-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.025  The recommended git tool is: git
00:00:00.025  using credential 00000000-0000-0000-0000-000000000002
00:00:00.028   > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/vfio-user-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.041  Fetching changes from the remote Git repository
00:00:00.044   > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.060  Using shallow fetch with depth 1
00:00:00.060  Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.060   > git --version # timeout=10
00:00:00.075   > git --version # 'git version 2.39.2'
00:00:00.075  using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.094  Setting http proxy: proxy-dmz.intel.com:911
00:00:00.094   > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.365   > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.378   > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.390  Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:04.390   > git config core.sparsecheckout # timeout=10
00:00:04.400   > git read-tree -mu HEAD # timeout=10
00:00:04.416   > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:04.442  Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:04.442   > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:04.522  [Pipeline] Start of Pipeline
00:00:04.532  [Pipeline] library
00:00:04.533  Loading library shm_lib@master
00:00:04.533  Library shm_lib@master is cached. Copying from home.
00:00:04.547  [Pipeline] node
00:00:04.559  Running on GP6 in /var/jenkins/workspace/vfio-user-phy-autotest
00:00:04.560  [Pipeline] {
00:00:04.568  [Pipeline] catchError
00:00:04.569  [Pipeline] {
00:00:04.578  [Pipeline] wrap
00:00:04.584  [Pipeline] {
00:00:04.589  [Pipeline] stage
00:00:04.591  [Pipeline] { (Prologue)
00:00:04.855  [Pipeline] sh
00:00:05.139  + logger -p user.info -t JENKINS-CI
00:00:05.166  [Pipeline] echo
00:00:05.167  Node: GP6
00:00:05.175  [Pipeline] sh
00:00:05.476  [Pipeline] setCustomBuildProperty
00:00:05.487  [Pipeline] echo
00:00:05.488  Cleanup processes
00:00:05.494  [Pipeline] sh
00:00:05.778  + sudo pgrep -af /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:00:05.779  425711 sudo pgrep -af /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:00:05.792  [Pipeline] sh
00:00:06.079  ++ sudo pgrep -af /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:00:06.079  ++ grep -v 'sudo pgrep'
00:00:06.079  ++ awk '{print $1}'
00:00:06.079  + sudo kill -9
00:00:06.079  + true
00:00:06.093  [Pipeline] cleanWs
00:00:06.102  [WS-CLEANUP] Deleting project workspace...
00:00:06.102  [WS-CLEANUP] Deferred wipeout is used...
00:00:06.109  [WS-CLEANUP] done
00:00:06.114  [Pipeline] setCustomBuildProperty
00:00:06.125  [Pipeline] sh
00:00:06.405  + sudo git config --global --replace-all safe.directory '*'
00:00:06.495  [Pipeline] httpRequest
00:00:06.970  [Pipeline] echo
00:00:06.972  Sorcerer 10.211.164.101 is alive
00:00:06.981  [Pipeline] retry
00:00:06.983  [Pipeline] {
00:00:06.997  [Pipeline] httpRequest
00:00:07.002  HttpMethod: GET
00:00:07.002  URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.003  Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.025  Response Code: HTTP/1.1 200 OK
00:00:07.025  Success: Status code 200 is in the accepted range: 200,404
00:00:07.026  Saving response body to /var/jenkins/workspace/vfio-user-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:25.163  [Pipeline] }
00:00:25.178  [Pipeline] // retry
00:00:25.186  [Pipeline] sh
00:00:25.475  + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:25.491  [Pipeline] httpRequest
00:00:25.871  [Pipeline] echo
00:00:25.872  Sorcerer 10.211.164.101 is alive
00:00:25.881  [Pipeline] retry
00:00:25.883  [Pipeline] {
00:00:25.913  [Pipeline] httpRequest
00:00:25.918  HttpMethod: GET
00:00:25.918  URL: http://10.211.164.101/packages/spdk_b6a18b192deed44d4966a73e82862012fc8e96b4.tar.gz
00:00:25.919  Sending request to url: http://10.211.164.101/packages/spdk_b6a18b192deed44d4966a73e82862012fc8e96b4.tar.gz
00:00:25.927  Response Code: HTTP/1.1 200 OK
00:00:25.927  Success: Status code 200 is in the accepted range: 200,404
00:00:25.928  Saving response body to /var/jenkins/workspace/vfio-user-phy-autotest/spdk_b6a18b192deed44d4966a73e82862012fc8e96b4.tar.gz
00:03:47.536  [Pipeline] }
00:03:47.555  [Pipeline] // retry
00:03:47.563  [Pipeline] sh
00:03:47.852  + tar --no-same-owner -xf spdk_b6a18b192deed44d4966a73e82862012fc8e96b4.tar.gz
00:03:51.150  [Pipeline] sh
00:03:51.452  + git -C spdk log --oneline -n5
00:03:51.452  b6a18b192 nvme/rdma: Don't limit max_sge if UMR is used
00:03:51.452  1148849d6 nvme/rdma: Register UMR per IO request
00:03:51.452  0787c2b4e accel/mlx5: Support mkey registration
00:03:51.452  0ea9ac02f accel/mlx5: Create pool of UMRs
00:03:51.452  60adca7e1 lib/mlx5: API to configure UMR
00:03:51.464  [Pipeline] }
00:03:51.479  [Pipeline] // stage
00:03:51.490  [Pipeline] stage
00:03:51.492  [Pipeline] { (Prepare)
00:03:51.513  [Pipeline] writeFile
00:03:51.527  [Pipeline] sh
00:03:51.812  + logger -p user.info -t JENKINS-CI
00:03:51.825  [Pipeline] sh
00:03:52.112  + logger -p user.info -t JENKINS-CI
00:03:52.124  [Pipeline] sh
00:03:52.412  + cat autorun-spdk.conf
00:03:52.412  SPDK_RUN_FUNCTIONAL_TEST=1
00:03:52.412  SPDK_TEST_VFIOUSER_QEMU=1
00:03:52.412  SPDK_RUN_ASAN=1
00:03:52.412  SPDK_RUN_UBSAN=1
00:03:52.412  SPDK_TEST_SMA=1
00:03:52.421  RUN_NIGHTLY=0
00:03:52.427  [Pipeline] readFile
00:03:52.451  [Pipeline] copyArtifacts
00:03:55.336  Copied 1 artifact from "qemu-vfio" build number 34
00:03:55.341  [Pipeline] sh
00:03:55.630  + tar xf qemu-vfio.tar.gz
00:03:58.208  [Pipeline] copyArtifacts
00:03:58.229  Copied 1 artifact from "vagrant-build-vhost" build number 6
00:03:58.234  [Pipeline] sh
00:03:58.521  + sudo mkdir -p /var/spdk/dependencies/vhost
00:03:58.534  [Pipeline] sh
00:03:58.847  + cd /var/spdk/dependencies/vhost
00:03:58.847  + md5sum --quiet -c /var/jenkins/workspace/vfio-user-phy-autotest/spdk_test_image.qcow2.gz.md5
00:03:58.847  md5sum: spdk_test_image.qcow2.gz: No such file or directory
00:03:58.847  spdk_test_image.qcow2.gz: FAILED open or read
00:03:58.847  md5sum: WARNING: 1 listed file could not be read
00:03:58.864  [Pipeline] copyArtifacts
00:05:14.989  Copied 1 artifact from "vagrant-build-vhost" build number 6
00:05:14.992  [Pipeline] sh
00:05:15.281  + sudo mv spdk_test_image.qcow2.gz /var/spdk/dependencies/vhost
00:05:15.293  [Pipeline] sh
00:05:15.578  + sudo rm -f /var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:05:15.595  [Pipeline] withEnv
00:05:15.598  [Pipeline] {
00:05:15.612  [Pipeline] sh
00:05:15.899  + set -ex
00:05:15.899  + [[ -f /var/jenkins/workspace/vfio-user-phy-autotest/autorun-spdk.conf ]]
00:05:15.899  + source /var/jenkins/workspace/vfio-user-phy-autotest/autorun-spdk.conf
00:05:15.899  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:15.899  ++ SPDK_TEST_VFIOUSER_QEMU=1
00:05:15.899  ++ SPDK_RUN_ASAN=1
00:05:15.899  ++ SPDK_RUN_UBSAN=1
00:05:15.899  ++ SPDK_TEST_SMA=1
00:05:15.899  ++ RUN_NIGHTLY=0
00:05:15.899  + case $SPDK_TEST_NVMF_NICS in
00:05:15.899  + DRIVERS=
00:05:15.899  + [[ -n '' ]]
00:05:15.899  + exit 0
00:05:15.909  [Pipeline] }
00:05:15.924  [Pipeline] // withEnv
00:05:15.929  [Pipeline] }
00:05:15.942  [Pipeline] // stage
00:05:15.952  [Pipeline] catchError
00:05:15.954  [Pipeline] {
00:05:15.967  [Pipeline] timeout
00:05:15.967  Timeout set to expire in 35 min
00:05:15.969  [Pipeline] {
00:05:15.982  [Pipeline] stage
00:05:15.984  [Pipeline] { (Tests)
00:05:15.997  [Pipeline] sh
00:05:16.282  + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/vfio-user-phy-autotest
00:05:16.282  ++ readlink -f /var/jenkins/workspace/vfio-user-phy-autotest
00:05:16.282  + DIR_ROOT=/var/jenkins/workspace/vfio-user-phy-autotest
00:05:16.282  + [[ -n /var/jenkins/workspace/vfio-user-phy-autotest ]]
00:05:16.282  + DIR_SPDK=/var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:05:16.282  + DIR_OUTPUT=/var/jenkins/workspace/vfio-user-phy-autotest/output
00:05:16.282  + [[ -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk ]]
00:05:16.282  + [[ ! -d /var/jenkins/workspace/vfio-user-phy-autotest/output ]]
00:05:16.282  + mkdir -p /var/jenkins/workspace/vfio-user-phy-autotest/output
00:05:16.282  + [[ -d /var/jenkins/workspace/vfio-user-phy-autotest/output ]]
00:05:16.282  + [[ vfio-user-phy-autotest == pkgdep-* ]]
00:05:16.282  + cd /var/jenkins/workspace/vfio-user-phy-autotest
00:05:16.282  + source /etc/os-release
00:05:16.282  ++ NAME='Fedora Linux'
00:05:16.282  ++ VERSION='39 (Cloud Edition)'
00:05:16.282  ++ ID=fedora
00:05:16.282  ++ VERSION_ID=39
00:05:16.282  ++ VERSION_CODENAME=
00:05:16.282  ++ PLATFORM_ID=platform:f39
00:05:16.282  ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:05:16.282  ++ ANSI_COLOR='0;38;2;60;110;180'
00:05:16.282  ++ LOGO=fedora-logo-icon
00:05:16.282  ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:05:16.282  ++ HOME_URL=https://fedoraproject.org/
00:05:16.282  ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:05:16.282  ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:05:16.282  ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:05:16.282  ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:05:16.282  ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:05:16.282  ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:05:16.282  ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:05:16.282  ++ SUPPORT_END=2024-11-12
00:05:16.282  ++ VARIANT='Cloud Edition'
00:05:16.282  ++ VARIANT_ID=cloud
00:05:16.282  + uname -a
00:05:16.282  Linux spdk-gp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:05:16.282  + sudo /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh status
00:05:17.217  Hugepages
00:05:17.217  node     hugesize     free /  total
00:05:17.217  node0   1048576kB        0 /      0
00:05:17.217  node0      2048kB        0 /      0
00:05:17.217  node1   1048576kB        0 /      0
00:05:17.217  node1      2048kB        0 /      0
00:05:17.217  
00:05:17.217  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:05:17.217  I/OAT                     0000:00:04.0    8086   0e20   0       ioatdma          -          -
00:05:17.217  I/OAT                     0000:00:04.1    8086   0e21   0       ioatdma          -          -
00:05:17.217  I/OAT                     0000:00:04.2    8086   0e22   0       ioatdma          -          -
00:05:17.217  I/OAT                     0000:00:04.3    8086   0e23   0       ioatdma          -          -
00:05:17.217  I/OAT                     0000:00:04.4    8086   0e24   0       ioatdma          -          -
00:05:17.217  I/OAT                     0000:00:04.5    8086   0e25   0       ioatdma          -          -
00:05:17.217  I/OAT                     0000:00:04.6    8086   0e26   0       ioatdma          -          -
00:05:17.476  I/OAT                     0000:00:04.7    8086   0e27   0       ioatdma          -          -
00:05:17.476  NVMe                      0000:0b:00.0    8086   0a54   0       nvme             nvme0      nvme0n1
00:05:17.476  I/OAT                     0000:80:04.0    8086   0e20   1       ioatdma          -          -
00:05:17.476  I/OAT                     0000:80:04.1    8086   0e21   1       ioatdma          -          -
00:05:17.476  I/OAT                     0000:80:04.2    8086   0e22   1       ioatdma          -          -
00:05:17.476  I/OAT                     0000:80:04.3    8086   0e23   1       ioatdma          -          -
00:05:17.476  I/OAT                     0000:80:04.4    8086   0e24   1       ioatdma          -          -
00:05:17.476  I/OAT                     0000:80:04.5    8086   0e25   1       ioatdma          -          -
00:05:17.476  I/OAT                     0000:80:04.6    8086   0e26   1       ioatdma          -          -
00:05:17.476  I/OAT                     0000:80:04.7    8086   0e27   1       ioatdma          -          -
00:05:17.476  + rm -f /tmp/spdk-ld-path
00:05:17.476  + source autorun-spdk.conf
00:05:17.476  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:17.476  ++ SPDK_TEST_VFIOUSER_QEMU=1
00:05:17.476  ++ SPDK_RUN_ASAN=1
00:05:17.476  ++ SPDK_RUN_UBSAN=1
00:05:17.476  ++ SPDK_TEST_SMA=1
00:05:17.476  ++ RUN_NIGHTLY=0
00:05:17.476  + ((  SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1  ))
00:05:17.476  + [[ -n '' ]]
00:05:17.476  + sudo git config --global --add safe.directory /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:05:17.476  + for M in /var/spdk/build-*-manifest.txt
00:05:17.476  + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:05:17.476  + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/vfio-user-phy-autotest/output/
00:05:17.476  + for M in /var/spdk/build-*-manifest.txt
00:05:17.476  + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:05:17.477  + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/vfio-user-phy-autotest/output/
00:05:17.477  + for M in /var/spdk/build-*-manifest.txt
00:05:17.477  + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:05:17.477  + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/vfio-user-phy-autotest/output/
00:05:17.477  ++ uname
00:05:17.477  + [[ Linux == \L\i\n\u\x ]]
00:05:17.477  + sudo dmesg -T
00:05:17.477  + sudo dmesg --clear
00:05:17.477  + dmesg_pid=427707
00:05:17.477  + [[ Fedora Linux == FreeBSD ]]
00:05:17.477  + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:05:17.477  + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:05:17.477  + sudo dmesg -Tw
00:05:17.477  + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:05:17.477  + [[ -x /usr/src/fio-static/fio ]]
00:05:17.477  + export FIO_BIN=/usr/src/fio-static/fio
00:05:17.477  + FIO_BIN=/usr/src/fio-static/fio
00:05:17.477  + [[ /var/jenkins/workspace/vfio-user-phy-autotest/qemu_vfio/bin/qemu-system-x86_64 == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\v\f\i\o\-\u\s\e\r\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:05:17.477  ++ ldd /var/jenkins/workspace/vfio-user-phy-autotest/qemu_vfio/bin/qemu-system-x86_64
00:05:17.477  + deps='	linux-vdso.so.1 (0x00007ffc221a6000)
00:05:17.477  	libpixman-1.so.0 => /usr/lib64/libpixman-1.so.0 (0x00007ff29a63f000)
00:05:17.477  	libz.so.1 => /usr/lib64/libz.so.1 (0x00007ff29a625000)
00:05:17.477  	libudev.so.1 => /usr/lib64/libudev.so.1 (0x00007ff29a5ee000)
00:05:17.477  	libpmem.so.1 => /usr/lib64/libpmem.so.1 (0x00007ff29a595000)
00:05:17.477  	libdaxctl.so.1 => /usr/lib64/libdaxctl.so.1 (0x00007ff29a588000)
00:05:17.477  	libnuma.so.1 => /usr/lib64/libnuma.so.1 (0x00007ff29a579000)
00:05:17.477  	libgio-2.0.so.0 => /usr/lib64/libgio-2.0.so.0 (0x00007ff29a39f000)
00:05:17.477  	libgobject-2.0.so.0 => /usr/lib64/libgobject-2.0.so.0 (0x00007ff29a33f000)
00:05:17.477  	libglib-2.0.so.0 => /usr/lib64/libglib-2.0.so.0 (0x00007ff29a1f5000)
00:05:17.477  	librdmacm.so.1 => /usr/lib64/librdmacm.so.1 (0x00007ff29a1d9000)
00:05:17.477  	libibverbs.so.1 => /usr/lib64/libibverbs.so.1 (0x00007ff29a1b7000)
00:05:17.477  	libslirp.so.0 => /usr/lib64/libslirp.so.0 (0x00007ff29a195000)
00:05:17.477  	libbpf.so.0 => not found
00:05:17.477  	libncursesw.so.6 => /usr/lib64/libncursesw.so.6 (0x00007ff29a154000)
00:05:17.477  	libtinfo.so.6 => /usr/lib64/libtinfo.so.6 (0x00007ff29a11f000)
00:05:17.477  	libgmodule-2.0.so.0 => /usr/lib64/libgmodule-2.0.so.0 (0x00007ff29a118000)
00:05:17.477  	liburing.so.2 => /usr/lib64/liburing.so.2 (0x00007ff29a110000)
00:05:17.477  	libfuse3.so.3 => /usr/lib64/libfuse3.so.3 (0x00007ff29a0ce000)
00:05:17.477  	libiscsi.so.9 => /usr/lib64/iscsi/libiscsi.so.9 (0x00007ff29a09e000)
00:05:17.477  	libaio.so.1 => /usr/lib64/libaio.so.1 (0x00007ff29a099000)
00:05:17.477  	librbd.so.1 => /usr/lib64/librbd.so.1 (0x00007ff2997de000)
00:05:17.477  	librados.so.2 => /usr/lib64/librados.so.2 (0x00007ff299616000)
00:05:17.477  	libm.so.6 => /usr/lib64/libm.so.6 (0x00007ff299535000)
00:05:17.477  	libgcc_s.so.1 => /usr/lib64/libgcc_s.so.1 (0x00007ff299510000)
00:05:17.477  	libc.so.6 => /usr/lib64/libc.so.6 (0x00007ff29932c000)
00:05:17.477  	/lib64/ld-linux-x86-64.so.2 (0x00007ff29b7a3000)
00:05:17.477  	libcap.so.2 => /usr/lib64/libcap.so.2 (0x00007ff299322000)
00:05:17.477  	libndctl.so.6 => /usr/lib64/libndctl.so.6 (0x00007ff2992f5000)
00:05:17.477  	libuuid.so.1 => /usr/lib64/libuuid.so.1 (0x00007ff2992eb000)
00:05:17.477  	libkmod.so.2 => /usr/lib64/libkmod.so.2 (0x00007ff2992cf000)
00:05:17.477  	libmount.so.1 => /usr/lib64/libmount.so.1 (0x00007ff29927c000)
00:05:17.477  	libselinux.so.1 => /usr/lib64/libselinux.so.1 (0x00007ff29924f000)
00:05:17.477  	libffi.so.8 => /usr/lib64/libffi.so.8 (0x00007ff29923f000)
00:05:17.477  	libpcre2-8.so.0 => /usr/lib64/libpcre2-8.so.0 (0x00007ff2991a4000)
00:05:17.477  	libnl-3.so.200 => /usr/lib64/libnl-3.so.200 (0x00007ff29917f000)
00:05:17.477  	libnl-route-3.so.200 => /usr/lib64/libnl-route-3.so.200 (0x00007ff2990e7000)
00:05:17.477  	libgcrypt.so.20 => /usr/lib64/libgcrypt.so.20 (0x00007ff298fad000)
00:05:17.477  	libssl.so.3 => /usr/lib64/libssl.so.3 (0x00007ff298f0a000)
00:05:17.477  	libcryptsetup.so.12 => /usr/lib64/libcryptsetup.so.12 (0x00007ff298e89000)
00:05:17.477  	libceph-common.so.2 => /usr/lib64/ceph/libceph-common.so.2 (0x00007ff298259000)
00:05:17.477  	libcrypto.so.3 => /usr/lib64/libcrypto.so.3 (0x00007ff297d80000)
00:05:17.477  	libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x00007ff297b2a000)
00:05:17.477  	libzstd.so.1 => /usr/lib64/libzstd.so.1 (0x00007ff297a6b000)
00:05:17.477  	liblzma.so.5 => /usr/lib64/liblzma.so.5 (0x00007ff297a38000)
00:05:17.477  	libblkid.so.1 => /usr/lib64/libblkid.so.1 (0x00007ff2979fc000)
00:05:17.477  	libgpg-error.so.0 => /usr/lib64/libgpg-error.so.0 (0x00007ff2979d6000)
00:05:17.477  	libdevmapper.so.1.02 => /usr/lib64/libdevmapper.so.1.02 (0x00007ff297977000)
00:05:17.477  	libargon2.so.1 => /usr/lib64/libargon2.so.1 (0x00007ff29796f000)
00:05:17.477  	libjson-c.so.5 => /usr/lib64/libjson-c.so.5 (0x00007ff29795b000)
00:05:17.477  	libresolv.so.2 => /usr/lib64/libresolv.so.2 (0x00007ff29794a000)
00:05:17.477  	libcurl.so.4 => /usr/lib64/libcurl.so.4 (0x00007ff297896000)
00:05:17.477  	libthrift-0.15.0.so => /usr/lib64/libthrift-0.15.0.so (0x00007ff2977fc000)
00:05:17.477  	libnghttp2.so.14 => /usr/lib64/libnghttp2.so.14 (0x00007ff2977cf000)
00:05:17.477  	libidn2.so.0 => /usr/lib64/libidn2.so.0 (0x00007ff2977ad000)
00:05:17.477  	libssh.so.4 => /usr/lib64/libssh.so.4 (0x00007ff29773a000)
00:05:17.477  	libpsl.so.5 => /usr/lib64/libpsl.so.5 (0x00007ff297726000)
00:05:17.477  	libgssapi_krb5.so.2 => /usr/lib64/libgssapi_krb5.so.2 (0x00007ff2976d0000)
00:05:17.477  	libldap.so.2 => /usr/lib64/libldap.so.2 (0x00007ff297669000)
00:05:17.477  	liblber.so.2 => /usr/lib64/liblber.so.2 (0x00007ff297657000)
00:05:17.477  	libbrotlidec.so.1 => /usr/lib64/libbrotlidec.so.1 (0x00007ff297649000)
00:05:17.477  	libunistring.so.5 => /usr/lib64/libunistring.so.5 (0x00007ff297499000)
00:05:17.477  	libkrb5.so.3 => /usr/lib64/libkrb5.so.3 (0x00007ff2973c0000)
00:05:17.477  	libk5crypto.so.3 => /usr/lib64/libk5crypto.so.3 (0x00007ff2973a6000)
00:05:17.477  	libcom_err.so.2 => /usr/lib64/libcom_err.so.2 (0x00007ff29739f000)
00:05:17.477  	libkrb5support.so.0 => /usr/lib64/libkrb5support.so.0 (0x00007ff29738f000)
00:05:17.477  	libkeyutils.so.1 => /usr/lib64/libkeyutils.so.1 (0x00007ff297388000)
00:05:17.477  	libevent-2.1.so.7 => /usr/lib64/libevent-2.1.so.7 (0x00007ff297330000)
00:05:17.477  	libsasl2.so.3 => /usr/lib64/libsasl2.so.3 (0x00007ff297311000)
00:05:17.477  	libbrotlicommon.so.1 => /usr/lib64/libbrotlicommon.so.1 (0x00007ff2972ec000)
00:05:17.477  	libcrypt.so.2 => /usr/lib64/libcrypt.so.2 (0x00007ff2972b3000)'
00:05:17.477  + [[ 	linux-vdso.so.1 (0x00007ffc221a6000)
00:05:17.477  	libpixman-1.so.0 => /usr/lib64/libpixman-1.so.0 (0x00007ff29a63f000)
00:05:17.477  	libz.so.1 => /usr/lib64/libz.so.1 (0x00007ff29a625000)
00:05:17.477  	libudev.so.1 => /usr/lib64/libudev.so.1 (0x00007ff29a5ee000)
00:05:17.477  	libpmem.so.1 => /usr/lib64/libpmem.so.1 (0x00007ff29a595000)
00:05:17.477  	libdaxctl.so.1 => /usr/lib64/libdaxctl.so.1 (0x00007ff29a588000)
00:05:17.477  	libnuma.so.1 => /usr/lib64/libnuma.so.1 (0x00007ff29a579000)
00:05:17.477  	libgio-2.0.so.0 => /usr/lib64/libgio-2.0.so.0 (0x00007ff29a39f000)
00:05:17.477  	libgobject-2.0.so.0 => /usr/lib64/libgobject-2.0.so.0 (0x00007ff29a33f000)
00:05:17.477  	libglib-2.0.so.0 => /usr/lib64/libglib-2.0.so.0 (0x00007ff29a1f5000)
00:05:17.477  	librdmacm.so.1 => /usr/lib64/librdmacm.so.1 (0x00007ff29a1d9000)
00:05:17.477  	libibverbs.so.1 => /usr/lib64/libibverbs.so.1 (0x00007ff29a1b7000)
00:05:17.477  	libslirp.so.0 => /usr/lib64/libslirp.so.0 (0x00007ff29a195000)
00:05:17.477  	libbpf.so.0 => not found
00:05:17.477  	libncursesw.so.6 => /usr/lib64/libncursesw.so.6 (0x00007ff29a154000)
00:05:17.477  	libtinfo.so.6 => /usr/lib64/libtinfo.so.6 (0x00007ff29a11f000)
00:05:17.477  	libgmodule-2.0.so.0 => /usr/lib64/libgmodule-2.0.so.0 (0x00007ff29a118000)
00:05:17.477  	liburing.so.2 => /usr/lib64/liburing.so.2 (0x00007ff29a110000)
00:05:17.477  	libfuse3.so.3 => /usr/lib64/libfuse3.so.3 (0x00007ff29a0ce000)
00:05:17.477  	libiscsi.so.9 => /usr/lib64/iscsi/libiscsi.so.9 (0x00007ff29a09e000)
00:05:17.477  	libaio.so.1 => /usr/lib64/libaio.so.1 (0x00007ff29a099000)
00:05:17.477  	librbd.so.1 => /usr/lib64/librbd.so.1 (0x00007ff2997de000)
00:05:17.477  	librados.so.2 => /usr/lib64/librados.so.2 (0x00007ff299616000)
00:05:17.477  	libm.so.6 => /usr/lib64/libm.so.6 (0x00007ff299535000)
00:05:17.477  	libgcc_s.so.1 => /usr/lib64/libgcc_s.so.1 (0x00007ff299510000)
00:05:17.477  	libc.so.6 => /usr/lib64/libc.so.6 (0x00007ff29932c000)
00:05:17.477  	/lib64/ld-linux-x86-64.so.2 (0x00007ff29b7a3000)
00:05:17.477  	libcap.so.2 => /usr/lib64/libcap.so.2 (0x00007ff299322000)
00:05:17.477  	libndctl.so.6 => /usr/lib64/libndctl.so.6 (0x00007ff2992f5000)
00:05:17.477  	libuuid.so.1 => /usr/lib64/libuuid.so.1 (0x00007ff2992eb000)
00:05:17.477  	libkmod.so.2 => /usr/lib64/libkmod.so.2 (0x00007ff2992cf000)
00:05:17.478  	libmount.so.1 => /usr/lib64/libmount.so.1 (0x00007ff29927c000)
00:05:17.478  	libselinux.so.1 => /usr/lib64/libselinux.so.1 (0x00007ff29924f000)
00:05:17.478  	libffi.so.8 => /usr/lib64/libffi.so.8 (0x00007ff29923f000)
00:05:17.478  	libpcre2-8.so.0 => /usr/lib64/libpcre2-8.so.0 (0x00007ff2991a4000)
00:05:17.478  	libnl-3.so.200 => /usr/lib64/libnl-3.so.200 (0x00007ff29917f000)
00:05:17.478  	libnl-route-3.so.200 => /usr/lib64/libnl-route-3.so.200 (0x00007ff2990e7000)
00:05:17.478  	libgcrypt.so.20 => /usr/lib64/libgcrypt.so.20 (0x00007ff298fad000)
00:05:17.478  	libssl.so.3 => /usr/lib64/libssl.so.3 (0x00007ff298f0a000)
00:05:17.478  	libcryptsetup.so.12 => /usr/lib64/libcryptsetup.so.12 (0x00007ff298e89000)
00:05:17.478  	libceph-common.so.2 => /usr/lib64/ceph/libceph-common.so.2 (0x00007ff298259000)
00:05:17.478  	libcrypto.so.3 => /usr/lib64/libcrypto.so.3 (0x00007ff297d80000)
00:05:17.478  	libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x00007ff297b2a000)
00:05:17.478  	libzstd.so.1 => /usr/lib64/libzstd.so.1 (0x00007ff297a6b000)
00:05:17.478  	liblzma.so.5 => /usr/lib64/liblzma.so.5 (0x00007ff297a38000)
00:05:17.478  	libblkid.so.1 => /usr/lib64/libblkid.so.1 (0x00007ff2979fc000)
00:05:17.478  	libgpg-error.so.0 => /usr/lib64/libgpg-error.so.0 (0x00007ff2979d6000)
00:05:17.478  	libdevmapper.so.1.02 => /usr/lib64/libdevmapper.so.1.02 (0x00007ff297977000)
00:05:17.478  	libargon2.so.1 => /usr/lib64/libargon2.so.1 (0x00007ff29796f000)
00:05:17.478  	libjson-c.so.5 => /usr/lib64/libjson-c.so.5 (0x00007ff29795b000)
00:05:17.478  	libresolv.so.2 => /usr/lib64/libresolv.so.2 (0x00007ff29794a000)
00:05:17.478  	libcurl.so.4 => /usr/lib64/libcurl.so.4 (0x00007ff297896000)
00:05:17.478  	libthrift-0.15.0.so => /usr/lib64/libthrift-0.15.0.so (0x00007ff2977fc000)
00:05:17.478  	libnghttp2.so.14 => /usr/lib64/libnghttp2.so.14 (0x00007ff2977cf000)
00:05:17.478  	libidn2.so.0 => /usr/lib64/libidn2.so.0 (0x00007ff2977ad000)
00:05:17.478  	libssh.so.4 => /usr/lib64/libssh.so.4 (0x00007ff29773a000)
00:05:17.478  	libpsl.so.5 => /usr/lib64/libpsl.so.5 (0x00007ff297726000)
00:05:17.478  	libgssapi_krb5.so.2 => /usr/lib64/libgssapi_krb5.so.2 (0x00007ff2976d0000)
00:05:17.478  	libldap.so.2 => /usr/lib64/libldap.so.2 (0x00007ff297669000)
00:05:17.478  	liblber.so.2 => /usr/lib64/liblber.so.2 (0x00007ff297657000)
00:05:17.478  	libbrotlidec.so.1 => /usr/lib64/libbrotlidec.so.1 (0x00007ff297649000)
00:05:17.478  	libunistring.so.5 => /usr/lib64/libunistring.so.5 (0x00007ff297499000)
00:05:17.478  	libkrb5.so.3 => /usr/lib64/libkrb5.so.3 (0x00007ff2973c0000)
00:05:17.478  	libk5crypto.so.3 => /usr/lib64/libk5crypto.so.3 (0x00007ff2973a6000)
00:05:17.478  	libcom_err.so.2 => /usr/lib64/libcom_err.so.2 (0x00007ff29739f000)
00:05:17.478  	libkrb5support.so.0 => /usr/lib64/libkrb5support.so.0 (0x00007ff29738f000)
00:05:17.478  	libkeyutils.so.1 => /usr/lib64/libkeyutils.so.1 (0x00007ff297388000)
00:05:17.478  	libevent-2.1.so.7 => /usr/lib64/libevent-2.1.so.7 (0x00007ff297330000)
00:05:17.478  	libsasl2.so.3 => /usr/lib64/libsasl2.so.3 (0x00007ff297311000)
00:05:17.478  	libbrotlicommon.so.1 => /usr/lib64/libbrotlicommon.so.1 (0x00007ff2972ec000)
00:05:17.478  	libcrypt.so.2 => /usr/lib64/libcrypt.so.2 (0x00007ff2972b3000) == *\n\o\t\ \f\o\u\n\d* ]]
00:05:17.478  + unset -v VFIO_QEMU_BIN
00:05:17.478  + [[ ! -v VFIO_QEMU_BIN ]]
00:05:17.478  + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:05:17.478  + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:05:17.478  + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:05:17.478  + [[ -e /usr/local/qemu/vanilla-latest ]]
00:05:17.478  + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:05:17.478  + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:05:17.478  + spdk/autorun.sh /var/jenkins/workspace/vfio-user-phy-autotest/autorun-spdk.conf
00:05:17.478    19:03:48  -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:05:17.478   19:03:48  -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/vfio-user-phy-autotest/autorun-spdk.conf
00:05:17.478    19:03:48  -- vfio-user-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:17.478    19:03:48  -- vfio-user-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_VFIOUSER_QEMU=1
00:05:17.478    19:03:48  -- vfio-user-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_RUN_ASAN=1
00:05:17.478    19:03:48  -- vfio-user-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_RUN_UBSAN=1
00:05:17.478    19:03:48  -- vfio-user-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_SMA=1
00:05:17.478    19:03:48  -- vfio-user-phy-autotest/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:05:17.478   19:03:48  -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:05:17.478   19:03:48  -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/vfio-user-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/vfio-user-phy-autotest/autorun-spdk.conf
00:05:17.736     19:03:48  -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:05:17.736    19:03:48  -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/common.sh
00:05:17.736     19:03:48  -- scripts/common.sh@15 -- $ shopt -s extglob
00:05:17.736     19:03:48  -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:05:17.736     19:03:48  -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:17.736     19:03:48  -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:17.736      19:03:48  -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:17.736      19:03:48  -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:17.736      19:03:48  -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:17.736      19:03:48  -- paths/export.sh@5 -- $ export PATH
00:05:17.736      19:03:48  -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:17.736    19:03:48  -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output
00:05:17.736      19:03:48  -- common/autobuild_common.sh@493 -- $ date +%s
00:05:17.736     19:03:48  -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733508228.XXXXXX
00:05:17.736    19:03:48  -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733508228.BwcZrn
00:05:17.736    19:03:48  -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:05:17.736    19:03:48  -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:05:17.736    19:03:48  -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/'
00:05:17.736    19:03:48  -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/vfio-user-phy-autotest/spdk/xnvme --exclude /tmp'
00:05:17.736    19:03:48  -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/vfio-user-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:05:17.736     19:03:48  -- common/autobuild_common.sh@509 -- $ get_config_params
00:05:17.736     19:03:48  -- common/autotest_common.sh@409 -- $ xtrace_disable
00:05:17.736     19:03:48  -- common/autotest_common.sh@10 -- $ set +x
00:05:17.736    19:03:48  -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-sma --with-crypto'
00:05:17.736    19:03:48  -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:05:17.736    19:03:48  -- pm/common@17 -- $ local monitor
00:05:17.736    19:03:48  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:17.736    19:03:48  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:17.736    19:03:48  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:17.736     19:03:48  -- pm/common@21 -- $ date +%s
00:05:17.736    19:03:48  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:17.736     19:03:48  -- pm/common@21 -- $ date +%s
00:05:17.736    19:03:48  -- pm/common@25 -- $ sleep 1
00:05:17.736     19:03:48  -- pm/common@21 -- $ date +%s
00:05:17.736     19:03:48  -- pm/common@21 -- $ date +%s
00:05:17.736    19:03:48  -- pm/common@21 -- $ /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733508228
00:05:17.736    19:03:48  -- pm/common@21 -- $ /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733508228
00:05:17.736    19:03:48  -- pm/common@21 -- $ /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733508228
00:05:17.736    19:03:48  -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733508228
00:05:17.736  Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733508228_collect-cpu-load.pm.log
00:05:17.736  Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733508228_collect-vmstat.pm.log
00:05:17.736  Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733508228_collect-cpu-temp.pm.log
00:05:17.736  Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733508228_collect-bmc-pm.bmc.pm.log
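The collector launches and the `trap stop_monitor_resources EXIT` that follows can be sketched as the usual background-PID-plus-EXIT-trap pattern. This is an illustrative reconstruction, not SPDK's actual `pm/common` code; the `sleep` commands stand in for the real `collect-cpu-load`/`collect-vmstat` scripts.

```shell
#!/usr/bin/env bash
# Sketch of the start/stop resource-monitor pattern traced above:
# launch each collector in the background, record its PID, and install
# an EXIT trap so the collectors are always reaped, even on failure.

declare -a MONITOR_PIDS=()

start_monitor_resources() {
    local monitor
    for monitor in "collect-cpu-load" "collect-vmstat"; do
        sleep 60 &                 # stand-in for "$monitor" -d "$output/power" -l
        MONITOR_PIDS+=("$!")
    done
}

stop_monitor_resources() {
    local pid
    for pid in "${MONITOR_PIDS[@]}"; do
        kill "$pid" 2>/dev/null || true
    done
}

trap stop_monitor_resources EXIT   # mirrors autobuild_common.sh@512 below
start_monitor_resources
```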
00:05:18.672    19:03:49  -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:05:18.672   19:03:49  -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:05:18.672   19:03:49  -- spdk/autobuild.sh@12 -- $ umask 022
00:05:18.672   19:03:49  -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:05:18.672   19:03:49  -- spdk/autobuild.sh@16 -- $ date -u
00:05:18.672  Fri Dec  6 06:03:49 PM UTC 2024
00:05:18.672   19:03:49  -- spdk/autobuild.sh@17 -- $ git describe --tags
00:05:18.672  v25.01-pre-311-gb6a18b192
00:05:18.672   19:03:49  -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:05:18.672   19:03:49  -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:05:18.672   19:03:49  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:05:18.672   19:03:49  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:05:18.672   19:03:49  -- common/autotest_common.sh@10 -- $ set +x
00:05:18.672  ************************************
00:05:18.672  START TEST asan
00:05:18.672  ************************************
00:05:18.672   19:03:49 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:05:18.672  using asan
00:05:18.672  
00:05:18.672  real	0m0.000s
00:05:18.672  user	0m0.000s
00:05:18.672  sys	0m0.000s
00:05:18.672   19:03:49 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:05:18.672   19:03:49 asan -- common/autotest_common.sh@10 -- $ set +x
00:05:18.672  ************************************
00:05:18.672  END TEST asan
00:05:18.672  ************************************
00:05:18.672   19:03:49  -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:05:18.672   19:03:49  -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:05:18.672   19:03:49  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:05:18.672   19:03:49  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:05:18.672   19:03:49  -- common/autotest_common.sh@10 -- $ set +x
00:05:18.672  ************************************
00:05:18.672  START TEST ubsan
00:05:18.672  ************************************
00:05:18.672   19:03:49 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:05:18.672  using ubsan
00:05:18.672  
00:05:18.672  real	0m0.000s
00:05:18.672  user	0m0.000s
00:05:18.672  sys	0m0.000s
00:05:18.672   19:03:49 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:05:18.672   19:03:49 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:05:18.672  ************************************
00:05:18.672  END TEST ubsan
00:05:18.672  ************************************
00:05:18.672   19:03:49  -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:05:18.672   19:03:49  -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:05:18.672   19:03:49  -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:05:18.672   19:03:49  -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:05:18.672   19:03:49  -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:05:18.672   19:03:49  -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:05:18.672   19:03:49  -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:05:18.672   19:03:49  -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:05:18.672   19:03:49  -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/vfio-user-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-sma --with-crypto --with-shared
00:05:18.928  Using default SPDK env in /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk
00:05:18.929  Using default DPDK in /var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/build
00:05:19.186  Using 'verbs' RDMA provider
00:05:30.100  Configuring ISA-L (logfile: /var/jenkins/workspace/vfio-user-phy-autotest/spdk/.spdk-isal.log)...done.
00:05:40.093  Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/vfio-user-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:05:40.351  Creating mk/config.mk...done.
00:05:40.351  Creating mk/cc.flags.mk...done.
00:05:40.351  Type 'make' to build.
00:05:40.351   19:04:11  -- spdk/autobuild.sh@70 -- $ run_test make make -j48
00:05:40.351   19:04:11  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:05:40.351   19:04:11  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:05:40.351   19:04:11  -- common/autotest_common.sh@10 -- $ set +x
00:05:40.609  ************************************
00:05:40.609  START TEST make
00:05:40.609  ************************************
00:05:40.609   19:04:11 make -- common/autotest_common.sh@1129 -- $ make -j48
00:05:40.871  make[1]: Nothing to be done for 'all'.
00:05:42.782  The Meson build system
00:05:42.782  Version: 1.5.0
00:05:42.782  Source dir: /var/jenkins/workspace/vfio-user-phy-autotest/spdk/libvfio-user
00:05:42.782  Build dir: /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/libvfio-user/build-debug
00:05:42.782  Build type: native build
00:05:42.782  Project name: libvfio-user
00:05:42.782  Project version: 0.0.1
00:05:42.782  C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:05:42.782  C linker for the host machine: cc ld.bfd 2.40-14
00:05:42.782  Host machine cpu family: x86_64
00:05:42.782  Host machine cpu: x86_64
00:05:42.782  Run-time dependency threads found: YES
00:05:42.782  Library dl found: YES
00:05:42.782  Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:05:42.782  Run-time dependency json-c found: YES 0.17
00:05:42.782  Run-time dependency cmocka found: YES 1.1.7
00:05:42.782  Program pytest-3 found: NO
00:05:42.782  Program flake8 found: NO
00:05:42.782  Program misspell-fixer found: NO
00:05:42.782  Program restructuredtext-lint found: NO
00:05:42.782  Program valgrind found: YES (/usr/bin/valgrind)
00:05:42.782  Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:05:42.782  Compiler for C supports arguments -Wmissing-declarations: YES 
00:05:42.782  Compiler for C supports arguments -Wwrite-strings: YES 
00:05:42.782  ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:05:42.782  Program test-lspci.sh found: YES (/var/jenkins/workspace/vfio-user-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:05:42.782  Program test-linkage.sh found: YES (/var/jenkins/workspace/vfio-user-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:05:42.782  ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:05:42.782  Build targets in project: 8
00:05:42.782  WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:05:42.782   * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:05:42.782  
00:05:42.782  libvfio-user 0.0.1
00:05:42.782  
00:05:42.782    User defined options
00:05:42.782      buildtype      : debug
00:05:42.782      default_library: shared
00:05:42.782      libdir         : /usr/local/lib
00:05:42.782  
00:05:42.782  Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:05:43.367  ninja: Entering directory `/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/libvfio-user/build-debug'
00:05:43.631  [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:05:43.631  [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:05:43.631  [3/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:05:43.631  [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:05:43.631  [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:05:43.631  [6/37] Compiling C object samples/lspci.p/lspci.c.o
00:05:43.631  [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:05:43.631  [8/37] Compiling C object samples/null.p/null.c.o
00:05:43.631  [9/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:05:43.631  [10/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:05:43.631  [11/37] Compiling C object samples/server.p/server.c.o
00:05:43.631  [12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:05:43.631  [13/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:05:43.631  [14/37] Compiling C object test/unit_tests.p/mocks.c.o
00:05:43.631  [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:05:43.631  [16/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:05:43.890  [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:05:43.890  [18/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:05:43.890  [19/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:05:43.890  [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:05:43.890  [21/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:05:43.890  [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:05:43.890  [23/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:05:43.890  [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:05:43.890  [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:05:43.890  [26/37] Compiling C object samples/client.p/client.c.o
00:05:43.890  [27/37] Linking target samples/client
00:05:43.890  [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:05:43.890  [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:05:44.171  [30/37] Linking target lib/libvfio-user.so.0.0.1
00:05:44.171  [31/37] Linking target test/unit_tests
00:05:44.171  [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:05:44.171  [33/37] Linking target samples/lspci
00:05:44.171  [34/37] Linking target samples/gpio-pci-idio-16
00:05:44.171  [35/37] Linking target samples/null
00:05:44.433  [36/37] Linking target samples/server
00:05:44.433  [37/37] Linking target samples/shadow_ioeventfd_server
00:05:44.433  INFO: autodetecting backend as ninja
00:05:44.433  INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/libvfio-user/build-debug
00:05:44.433  DESTDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/libvfio-user/build-debug
00:05:45.380  ninja: Entering directory `/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/libvfio-user/build-debug'
00:05:45.380  ninja: no work to do.
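The `DESTDIR=... meson install` step above stages the build artifacts: DESTDIR is prepended verbatim to the configured prefix, so with `libdir /usr/local/lib` the shared object lands under the SPDK build tree rather than the live filesystem. A pure-shell illustration of the resulting path join (the final path is derived from values shown in this log):

```shell
#!/usr/bin/env bash
# DESTDIR staging: install path = DESTDIR + configured libdir.

DESTDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/libvfio-user
libdir=/usr/local/lib

staged_path="${DESTDIR}${libdir}/libvfio-user.so.0.0.1"
echo "staged install path: $staged_path"
```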
00:06:24.115  The Meson build system
00:06:24.115  Version: 1.5.0
00:06:24.115  Source dir: /var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk
00:06:24.115  Build dir: /var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/build-tmp
00:06:24.115  Build type: native build
00:06:24.115  Program cat found: YES (/usr/bin/cat)
00:06:24.115  Project name: DPDK
00:06:24.115  Project version: 24.03.0
00:06:24.115  C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:06:24.115  C linker for the host machine: cc ld.bfd 2.40-14
00:06:24.115  Host machine cpu family: x86_64
00:06:24.115  Host machine cpu: x86_64
00:06:24.115  Message: ## Building in Developer Mode ##
00:06:24.115  Program pkg-config found: YES (/usr/bin/pkg-config)
00:06:24.115  Program check-symbols.sh found: YES (/var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:06:24.115  Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:06:24.115  Program python3 found: YES (/usr/bin/python3)
00:06:24.115  Program cat found: YES (/usr/bin/cat)
00:06:24.115  Compiler for C supports arguments -march=native: YES 
00:06:24.115  Checking for size of "void *" : 8 
00:06:24.115  Checking for size of "void *" : 8 (cached)
00:06:24.115  Compiler for C supports link arguments -Wl,--undefined-version: YES 
00:06:24.115  Library m found: YES
00:06:24.115  Library numa found: YES
00:06:24.115  Has header "numaif.h" : YES 
00:06:24.115  Library fdt found: NO
00:06:24.115  Library execinfo found: NO
00:06:24.115  Has header "execinfo.h" : YES 
00:06:24.115  Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:06:24.115  Run-time dependency libarchive found: NO (tried pkgconfig)
00:06:24.115  Run-time dependency libbsd found: NO (tried pkgconfig)
00:06:24.115  Run-time dependency jansson found: NO (tried pkgconfig)
00:06:24.115  Run-time dependency openssl found: YES 3.1.1
00:06:24.115  Run-time dependency libpcap found: YES 1.10.4
00:06:24.115  Has header "pcap.h" with dependency libpcap: YES 
00:06:24.115  Compiler for C supports arguments -Wcast-qual: YES 
00:06:24.115  Compiler for C supports arguments -Wdeprecated: YES 
00:06:24.115  Compiler for C supports arguments -Wformat: YES 
00:06:24.115  Compiler for C supports arguments -Wformat-nonliteral: NO 
00:06:24.115  Compiler for C supports arguments -Wformat-security: NO 
00:06:24.115  Compiler for C supports arguments -Wmissing-declarations: YES 
00:06:24.115  Compiler for C supports arguments -Wmissing-prototypes: YES 
00:06:24.115  Compiler for C supports arguments -Wnested-externs: YES 
00:06:24.115  Compiler for C supports arguments -Wold-style-definition: YES 
00:06:24.115  Compiler for C supports arguments -Wpointer-arith: YES 
00:06:24.115  Compiler for C supports arguments -Wsign-compare: YES 
00:06:24.115  Compiler for C supports arguments -Wstrict-prototypes: YES 
00:06:24.115  Compiler for C supports arguments -Wundef: YES 
00:06:24.115  Compiler for C supports arguments -Wwrite-strings: YES 
00:06:24.115  Compiler for C supports arguments -Wno-address-of-packed-member: YES 
00:06:24.115  Compiler for C supports arguments -Wno-packed-not-aligned: YES 
00:06:24.115  Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:06:24.115  Compiler for C supports arguments -Wno-zero-length-bounds: YES 
00:06:24.115  Program objdump found: YES (/usr/bin/objdump)
00:06:24.115  Compiler for C supports arguments -mavx512f: YES 
00:06:24.115  Checking if "AVX512 checking" compiles: YES 
00:06:24.115  Fetching value of define "__SSE4_2__" : 1 
00:06:24.115  Fetching value of define "__AES__" : 1 
00:06:24.115  Fetching value of define "__AVX__" : 1 
00:06:24.115  Fetching value of define "__AVX2__" : (undefined) 
00:06:24.115  Fetching value of define "__AVX512BW__" : (undefined) 
00:06:24.115  Fetching value of define "__AVX512CD__" : (undefined) 
00:06:24.115  Fetching value of define "__AVX512DQ__" : (undefined) 
00:06:24.115  Fetching value of define "__AVX512F__" : (undefined) 
00:06:24.115  Fetching value of define "__AVX512VL__" : (undefined) 
00:06:24.115  Fetching value of define "__PCLMUL__" : 1 
00:06:24.115  Fetching value of define "__RDRND__" : 1 
00:06:24.115  Fetching value of define "__RDSEED__" : (undefined) 
00:06:24.115  Fetching value of define "__VPCLMULQDQ__" : (undefined) 
00:06:24.115  Fetching value of define "__znver1__" : (undefined) 
00:06:24.115  Fetching value of define "__znver2__" : (undefined) 
00:06:24.115  Fetching value of define "__znver3__" : (undefined) 
00:06:24.115  Fetching value of define "__znver4__" : (undefined) 
00:06:24.115  Library asan found: YES
00:06:24.115  Compiler for C supports arguments -Wno-format-truncation: YES 
00:06:24.115  Message: lib/log: Defining dependency "log"
00:06:24.115  Message: lib/kvargs: Defining dependency "kvargs"
00:06:24.115  Message: lib/telemetry: Defining dependency "telemetry"
00:06:24.115  Library rt found: YES
00:06:24.115  Checking for function "getentropy" : NO 
00:06:24.115  Message: lib/eal: Defining dependency "eal"
00:06:24.115  Message: lib/ring: Defining dependency "ring"
00:06:24.115  Message: lib/rcu: Defining dependency "rcu"
00:06:24.115  Message: lib/mempool: Defining dependency "mempool"
00:06:24.115  Message: lib/mbuf: Defining dependency "mbuf"
00:06:24.115  Fetching value of define "__PCLMUL__" : 1 (cached)
00:06:24.115  Fetching value of define "__AVX512F__" : (undefined) (cached)
00:06:24.115  Compiler for C supports arguments -mpclmul: YES 
00:06:24.115  Compiler for C supports arguments -maes: YES 
00:06:24.115  Compiler for C supports arguments -mavx512f: YES (cached)
00:06:24.115  Compiler for C supports arguments -mavx512bw: YES 
00:06:24.115  Compiler for C supports arguments -mavx512dq: YES 
00:06:24.116  Compiler for C supports arguments -mavx512vl: YES 
00:06:24.116  Compiler for C supports arguments -mvpclmulqdq: YES 
00:06:24.116  Compiler for C supports arguments -mavx2: YES 
00:06:24.116  Compiler for C supports arguments -mavx: YES 
00:06:24.116  Message: lib/net: Defining dependency "net"
00:06:24.116  Message: lib/meter: Defining dependency "meter"
00:06:24.116  Message: lib/ethdev: Defining dependency "ethdev"
00:06:24.116  Message: lib/pci: Defining dependency "pci"
00:06:24.116  Message: lib/cmdline: Defining dependency "cmdline"
00:06:24.116  Message: lib/hash: Defining dependency "hash"
00:06:24.116  Message: lib/timer: Defining dependency "timer"
00:06:24.116  Message: lib/compressdev: Defining dependency "compressdev"
00:06:24.116  Message: lib/cryptodev: Defining dependency "cryptodev"
00:06:24.116  Message: lib/dmadev: Defining dependency "dmadev"
00:06:24.116  Compiler for C supports arguments -Wno-cast-qual: YES 
00:06:24.116  Message: lib/power: Defining dependency "power"
00:06:24.116  Message: lib/reorder: Defining dependency "reorder"
00:06:24.116  Message: lib/security: Defining dependency "security"
00:06:24.116  Has header "linux/userfaultfd.h" : YES 
00:06:24.116  Has header "linux/vduse.h" : YES 
00:06:24.116  Message: lib/vhost: Defining dependency "vhost"
00:06:24.116  Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:06:24.116  Message: drivers/bus/auxiliary: Defining dependency "bus_auxiliary"
00:06:24.116  Message: drivers/bus/pci: Defining dependency "bus_pci"
00:06:24.116  Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:06:24.116  Compiler for C supports arguments -std=c11: YES 
00:06:24.116  Compiler for C supports arguments -Wno-strict-prototypes: YES 
00:06:24.116  Compiler for C supports arguments -D_BSD_SOURCE: YES 
00:06:24.116  Compiler for C supports arguments -D_DEFAULT_SOURCE: YES 
00:06:24.116  Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES 
00:06:24.116  Run-time dependency libmlx5 found: YES 1.24.46.0
00:06:24.116  Run-time dependency libibverbs found: YES 1.14.46.0
00:06:24.116  Library mtcr_ul found: NO
00:06:24.116  Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_ESP" with dependencies libmlx5, libibverbs: YES 
00:06:24.116  Header "infiniband/verbs.h" has symbol "IBV_RX_HASH_IPSEC_SPI" with dependencies libmlx5, libibverbs: YES 
00:06:24.116  Header "infiniband/verbs.h" has symbol "IBV_ACCESS_RELAXED_ORDERING " with dependencies libmlx5, libibverbs: YES 
00:06:24.116  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQE_RES_FORMAT_CSUM_STRIDX" with dependencies libmlx5, libibverbs: YES 
00:06:24.116  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_MASK_TUNNEL_OFFLOADS" with dependencies libmlx5, libibverbs: YES 
00:06:24.116  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_MPW_ALLOWED" with dependencies libmlx5, libibverbs: YES 
00:06:24.116  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_CQE_128B_COMP" with dependencies libmlx5, libibverbs: YES 
00:06:24.116  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQ_INIT_ATTR_FLAGS_CQE_PAD" with dependencies libmlx5, libibverbs: YES 
00:06:24.116  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_flow_action_packet_reformat" with dependencies libmlx5, libibverbs: YES 
00:06:24.116  Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_MPLS" with dependencies libmlx5, libibverbs: YES 
00:06:24.116  Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAGS_PCI_WRITE_END_PADDING" with dependencies libmlx5, libibverbs: YES 
00:06:24.116  Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAG_RX_END_PADDING" with dependencies libmlx5, libibverbs: NO 
00:06:24.116  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_devx_port" with dependencies libmlx5, libibverbs: NO 
00:06:24.116  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_port" with dependencies libmlx5, libibverbs: YES 
00:06:24.116  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_ib_port" with dependencies libmlx5, libibverbs: YES 
00:06:24.116  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_create" with dependencies libmlx5, libibverbs: YES 
00:06:26.019  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_COUNTERS_DEVX" with dependencies libmlx5, libibverbs: YES 
00:06:26.019  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_DEFAULT_MISS" with dependencies libmlx5, libibverbs: YES 
00:06:26.019  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_query_async" with dependencies libmlx5, libibverbs: YES 
00:06:26.019  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_qp_query" with dependencies libmlx5, libibverbs: YES 
00:06:26.019  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_pp_alloc" with dependencies libmlx5, libibverbs: YES 
00:06:26.019  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_devx_tir" with dependencies libmlx5, libibverbs: YES 
00:06:26.020  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_get_event" with dependencies libmlx5, libibverbs: YES 
00:06:26.020  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_meter" with dependencies libmlx5, libibverbs: YES 
00:06:26.020  Header "infiniband/mlx5dv.h" has symbol "MLX5_MMAP_GET_NC_PAGES_CMD" with dependencies libmlx5, libibverbs: YES 
00:06:26.020  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_NIC_RX" with dependencies libmlx5, libibverbs: YES 
00:06:26.020  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_FDB" with dependencies libmlx5, libibverbs: YES 
00:06:26.020  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_push_vlan" with dependencies libmlx5, libibverbs: YES 
00:06:26.020  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_alloc_var" with dependencies libmlx5, libibverbs: YES 
00:06:26.020  Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ENHANCED_MPSW" with dependencies libmlx5, libibverbs: NO 
00:06:26.020  Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_SEND_EN" with dependencies libmlx5, libibverbs: NO 
00:06:26.020  Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_WAIT" with dependencies libmlx5, libibverbs: NO 
00:06:26.020  Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ACCESS_ASO" with dependencies libmlx5, libibverbs: NO 
00:06:26.020  Header "linux/if_link.h" has symbol "IFLA_NUM_VF" with dependencies libmlx5, libibverbs: YES 
00:06:26.020  Header "linux/if_link.h" has symbol "IFLA_EXT_MASK" with dependencies libmlx5, libibverbs: YES 
00:06:26.020  Header "linux/if_link.h" has symbol "IFLA_PHYS_SWITCH_ID" with dependencies libmlx5, libibverbs: YES 
00:06:26.020  Header "linux/if_link.h" has symbol "IFLA_PHYS_PORT_NAME" with dependencies libmlx5, libibverbs: YES 
00:06:26.020  Header "rdma/rdma_netlink.h" has symbol "RDMA_NL_NLDEV" with dependencies libmlx5, libibverbs: YES 
00:06:26.020  Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_GET" with dependencies libmlx5, libibverbs: YES 
00:06:26.020  Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_PORT_GET" with dependencies libmlx5, libibverbs: YES 
00:06:26.020  Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_INDEX" with dependencies libmlx5, libibverbs: YES 
00:06:26.020  Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_NAME" with dependencies libmlx5, libibverbs: YES 
00:06:26.020  Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_INDEX" with dependencies libmlx5, libibverbs: YES 
00:06:26.020  Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_STATE" with dependencies libmlx5, libibverbs: YES 
00:06:26.020  Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_NDEV_INDEX" with dependencies libmlx5, libibverbs: YES 
00:06:26.020  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_domain" with dependencies libmlx5, libibverbs: YES 
00:06:26.020  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_sampler" with dependencies libmlx5, libibverbs: YES 
00:06:26.020  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_set_reclaim_device_memory" with dependencies libmlx5, libibverbs: YES 
00:06:26.020  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_array" with dependencies libmlx5, libibverbs: YES 
00:06:26.020  Header "linux/devlink.h" has symbol "DEVLINK_GENL_NAME" with dependencies libmlx5, libibverbs: YES 
00:06:26.020  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_aso" with dependencies libmlx5, libibverbs: YES 
00:06:26.020  Header "infiniband/verbs.h" has symbol "INFINIBAND_VERBS_H" with dependencies libmlx5, libibverbs: YES 
00:06:26.020  Header "infiniband/mlx5dv.h" has symbol "MLX5_WQE_UMR_CTRL_FLAG_INLINE" with dependencies libmlx5, libibverbs: YES 
00:06:26.020  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_rule" with dependencies libmlx5, libibverbs: YES 
00:06:26.020  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_ACTION_FLAGS_ASO_CT_DIRECTION_INITIATOR" with dependencies libmlx5, libibverbs: YES 
00:06:26.020  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_allow_duplicate_rules" with dependencies libmlx5, libibverbs: YES 
00:06:26.020  Header "infiniband/verbs.h" has symbol "ibv_reg_mr_iova" with dependencies libmlx5, libibverbs: YES 
00:06:26.020  Header "infiniband/verbs.h" has symbol "ibv_import_device" with dependencies libmlx5, libibverbs: YES 
00:06:26.020  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_root_table" with dependencies libmlx5, libibverbs: YES 
00:06:26.020  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_steering_anchor" with dependencies libmlx5, libibverbs: YES 
00:06:26.020  Header "infiniband/verbs.h" has symbol "ibv_is_fork_initialized" with dependencies libmlx5, libibverbs: YES 
00:06:26.020  Checking whether type "struct mlx5dv_sw_parsing_caps" has member "sw_parsing_offloads" with dependencies libmlx5, libibverbs: YES 
00:06:26.020  Checking whether type "struct ibv_counter_set_init_attr" has member "counter_set_id" with dependencies libmlx5, libibverbs: NO 
00:06:26.020  Checking whether type "struct ibv_counters_init_attr" has member "comp_mask" with dependencies libmlx5, libibverbs: YES 
00:06:26.020  Checking whether type "struct mlx5dv_devx_uar" has member "mmap_off" with dependencies libmlx5, libibverbs: YES 
00:06:26.020  Checking whether type "struct mlx5dv_flow_matcher_attr" has member "ft_type" with dependencies libmlx5, libibverbs: YES 
00:06:26.020  Configuring mlx5_autoconf.h using configuration
00:06:26.020  Message: drivers/common/mlx5: Defining dependency "common_mlx5"
00:06:26.020  Run-time dependency libcrypto found: YES 3.1.1
00:06:26.020  Library IPSec_MB found: YES
00:06:26.020  Fetching value of define "IMB_VERSION_STR" : "1.5.0" 
00:06:26.020  Message: drivers/common/qat: Defining dependency "common_qat"
00:06:26.020  Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:06:26.020  Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:06:26.020  Library IPSec_MB found: YES
00:06:26.020  Fetching value of define "IMB_VERSION_STR" : "1.5.0" (cached)
00:06:26.020  Message: drivers/crypto/ipsec_mb: Defining dependency "crypto_ipsec_mb"
00:06:26.020  Compiler for C supports arguments -std=c11: YES (cached)
00:06:26.020  Compiler for C supports arguments -Wno-strict-prototypes: YES (cached)
00:06:26.020  Compiler for C supports arguments -D_BSD_SOURCE: YES (cached)
00:06:26.020  Compiler for C supports arguments -D_DEFAULT_SOURCE: YES (cached)
00:06:26.020  Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES (cached)
00:06:26.020  Message: drivers/crypto/mlx5: Defining dependency "crypto_mlx5"
00:06:26.020  Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:06:26.020  Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:06:26.020  Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:06:26.020  Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:06:26.020  Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:06:26.020  Program doxygen found: YES (/usr/local/bin/doxygen)
00:06:26.020  Configuring doxy-api-html.conf using configuration
00:06:26.020  Configuring doxy-api-man.conf using configuration
00:06:26.020  Program mandb found: YES (/usr/bin/mandb)
00:06:26.020  Program sphinx-build found: NO
00:06:26.020  Configuring rte_build_config.h using configuration
00:06:26.020  Message: 
00:06:26.020  =================
00:06:26.020  Applications Enabled
00:06:26.020  =================
00:06:26.020  
00:06:26.020  apps:
00:06:26.020  	
00:06:26.020  
00:06:26.020  Message: 
00:06:26.020  =================
00:06:26.020  Libraries Enabled
00:06:26.020  =================
00:06:26.020  
00:06:26.020  libs:
00:06:26.020  	log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 
00:06:26.020  	net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 
00:06:26.020  	cryptodev, dmadev, power, reorder, security, vhost, 
00:06:26.020  
00:06:26.020  Message: 
00:06:26.020  ===============
00:06:26.020  Drivers Enabled
00:06:26.020  ===============
00:06:26.020  
00:06:26.020  common:
00:06:26.020  	mlx5, qat, 
00:06:26.020  bus:
00:06:26.020  	auxiliary, pci, vdev, 
00:06:26.020  mempool:
00:06:26.020  	ring, 
00:06:26.020  dma:
00:06:26.020  	
00:06:26.020  net:
00:06:26.020  	
00:06:26.020  crypto:
00:06:26.020  	ipsec_mb, mlx5, 
00:06:26.020  compress:
00:06:26.020  	
00:06:26.020  vdpa:
00:06:26.020  	
00:06:26.020  
00:06:26.020  Message: 
00:06:26.020  =================
00:06:26.020  Content Skipped
00:06:26.020  =================
00:06:26.020  
00:06:26.020  apps:
00:06:26.020  	dumpcap:	explicitly disabled via build config
00:06:26.020  	graph:	explicitly disabled via build config
00:06:26.020  	pdump:	explicitly disabled via build config
00:06:26.020  	proc-info:	explicitly disabled via build config
00:06:26.020  	test-acl:	explicitly disabled via build config
00:06:26.020  	test-bbdev:	explicitly disabled via build config
00:06:26.020  	test-cmdline:	explicitly disabled via build config
00:06:26.020  	test-compress-perf:	explicitly disabled via build config
00:06:26.020  	test-crypto-perf:	explicitly disabled via build config
00:06:26.020  	test-dma-perf:	explicitly disabled via build config
00:06:26.020  	test-eventdev:	explicitly disabled via build config
00:06:26.020  	test-fib:	explicitly disabled via build config
00:06:26.020  	test-flow-perf:	explicitly disabled via build config
00:06:26.020  	test-gpudev:	explicitly disabled via build config
00:06:26.020  	test-mldev:	explicitly disabled via build config
00:06:26.020  	test-pipeline:	explicitly disabled via build config
00:06:26.020  	test-pmd:	explicitly disabled via build config
00:06:26.020  	test-regex:	explicitly disabled via build config
00:06:26.020  	test-sad:	explicitly disabled via build config
00:06:26.020  	test-security-perf:	explicitly disabled via build config
00:06:26.020  	
00:06:26.020  libs:
00:06:26.020  	argparse:	explicitly disabled via build config
00:06:26.020  	metrics:	explicitly disabled via build config
00:06:26.020  	acl:	explicitly disabled via build config
00:06:26.020  	bbdev:	explicitly disabled via build config
00:06:26.020  	bitratestats:	explicitly disabled via build config
00:06:26.020  	bpf:	explicitly disabled via build config
00:06:26.020  	cfgfile:	explicitly disabled via build config
00:06:26.020  	distributor:	explicitly disabled via build config
00:06:26.020  	efd:	explicitly disabled via build config
00:06:26.020  	eventdev:	explicitly disabled via build config
00:06:26.020  	dispatcher:	explicitly disabled via build config
00:06:26.020  	gpudev:	explicitly disabled via build config
00:06:26.020  	gro:	explicitly disabled via build config
00:06:26.020  	gso:	explicitly disabled via build config
00:06:26.020  	ip_frag:	explicitly disabled via build config
00:06:26.020  	jobstats:	explicitly disabled via build config
00:06:26.020  	latencystats:	explicitly disabled via build config
00:06:26.020  	lpm:	explicitly disabled via build config
00:06:26.020  	member:	explicitly disabled via build config
00:06:26.020  	pcapng:	explicitly disabled via build config
00:06:26.020  	rawdev:	explicitly disabled via build config
00:06:26.020  	regexdev:	explicitly disabled via build config
00:06:26.020  	mldev:	explicitly disabled via build config
00:06:26.020  	rib:	explicitly disabled via build config
00:06:26.021  	sched:	explicitly disabled via build config
00:06:26.021  	stack:	explicitly disabled via build config
00:06:26.021  	ipsec:	explicitly disabled via build config
00:06:26.021  	pdcp:	explicitly disabled via build config
00:06:26.021  	fib:	explicitly disabled via build config
00:06:26.021  	port:	explicitly disabled via build config
00:06:26.021  	pdump:	explicitly disabled via build config
00:06:26.021  	table:	explicitly disabled via build config
00:06:26.021  	pipeline:	explicitly disabled via build config
00:06:26.021  	graph:	explicitly disabled via build config
00:06:26.021  	node:	explicitly disabled via build config
00:06:26.021  	
00:06:26.021  drivers:
00:06:26.021  	common/cpt:	not in enabled drivers build config
00:06:26.021  	common/dpaax:	not in enabled drivers build config
00:06:26.021  	common/iavf:	not in enabled drivers build config
00:06:26.021  	common/idpf:	not in enabled drivers build config
00:06:26.021  	common/ionic:	not in enabled drivers build config
00:06:26.021  	common/mvep:	not in enabled drivers build config
00:06:26.021  	common/octeontx:	not in enabled drivers build config
00:06:26.021  	bus/cdx:	not in enabled drivers build config
00:06:26.021  	bus/dpaa:	not in enabled drivers build config
00:06:26.021  	bus/fslmc:	not in enabled drivers build config
00:06:26.021  	bus/ifpga:	not in enabled drivers build config
00:06:26.021  	bus/platform:	not in enabled drivers build config
00:06:26.021  	bus/uacce:	not in enabled drivers build config
00:06:26.021  	bus/vmbus:	not in enabled drivers build config
00:06:26.021  	common/cnxk:	not in enabled drivers build config
00:06:26.021  	common/nfp:	not in enabled drivers build config
00:06:26.021  	common/nitrox:	not in enabled drivers build config
00:06:26.021  	common/sfc_efx:	not in enabled drivers build config
00:06:26.021  	mempool/bucket:	not in enabled drivers build config
00:06:26.021  	mempool/cnxk:	not in enabled drivers build config
00:06:26.021  	mempool/dpaa:	not in enabled drivers build config
00:06:26.021  	mempool/dpaa2:	not in enabled drivers build config
00:06:26.021  	mempool/octeontx:	not in enabled drivers build config
00:06:26.021  	mempool/stack:	not in enabled drivers build config
00:06:26.021  	dma/cnxk:	not in enabled drivers build config
00:06:26.021  	dma/dpaa:	not in enabled drivers build config
00:06:26.021  	dma/dpaa2:	not in enabled drivers build config
00:06:26.021  	dma/hisilicon:	not in enabled drivers build config
00:06:26.021  	dma/idxd:	not in enabled drivers build config
00:06:26.021  	dma/ioat:	not in enabled drivers build config
00:06:26.021  	dma/skeleton:	not in enabled drivers build config
00:06:26.021  	net/af_packet:	not in enabled drivers build config
00:06:26.021  	net/af_xdp:	not in enabled drivers build config
00:06:26.021  	net/ark:	not in enabled drivers build config
00:06:26.021  	net/atlantic:	not in enabled drivers build config
00:06:26.021  	net/avp:	not in enabled drivers build config
00:06:26.021  	net/axgbe:	not in enabled drivers build config
00:06:26.021  	net/bnx2x:	not in enabled drivers build config
00:06:26.021  	net/bnxt:	not in enabled drivers build config
00:06:26.021  	net/bonding:	not in enabled drivers build config
00:06:26.021  	net/cnxk:	not in enabled drivers build config
00:06:26.021  	net/cpfl:	not in enabled drivers build config
00:06:26.021  	net/cxgbe:	not in enabled drivers build config
00:06:26.021  	net/dpaa:	not in enabled drivers build config
00:06:26.021  	net/dpaa2:	not in enabled drivers build config
00:06:26.021  	net/e1000:	not in enabled drivers build config
00:06:26.021  	net/ena:	not in enabled drivers build config
00:06:26.021  	net/enetc:	not in enabled drivers build config
00:06:26.021  	net/enetfec:	not in enabled drivers build config
00:06:26.021  	net/enic:	not in enabled drivers build config
00:06:26.021  	net/failsafe:	not in enabled drivers build config
00:06:26.021  	net/fm10k:	not in enabled drivers build config
00:06:26.021  	net/gve:	not in enabled drivers build config
00:06:26.021  	net/hinic:	not in enabled drivers build config
00:06:26.021  	net/hns3:	not in enabled drivers build config
00:06:26.021  	net/i40e:	not in enabled drivers build config
00:06:26.021  	net/iavf:	not in enabled drivers build config
00:06:26.021  	net/ice:	not in enabled drivers build config
00:06:26.021  	net/idpf:	not in enabled drivers build config
00:06:26.021  	net/igc:	not in enabled drivers build config
00:06:26.021  	net/ionic:	not in enabled drivers build config
00:06:26.021  	net/ipn3ke:	not in enabled drivers build config
00:06:26.021  	net/ixgbe:	not in enabled drivers build config
00:06:26.021  	net/mana:	not in enabled drivers build config
00:06:26.021  	net/memif:	not in enabled drivers build config
00:06:26.021  	net/mlx4:	not in enabled drivers build config
00:06:26.021  	net/mlx5:	not in enabled drivers build config
00:06:26.021  	net/mvneta:	not in enabled drivers build config
00:06:26.021  	net/mvpp2:	not in enabled drivers build config
00:06:26.021  	net/netvsc:	not in enabled drivers build config
00:06:26.021  	net/nfb:	not in enabled drivers build config
00:06:26.021  	net/nfp:	not in enabled drivers build config
00:06:26.021  	net/ngbe:	not in enabled drivers build config
00:06:26.021  	net/null:	not in enabled drivers build config
00:06:26.021  	net/octeontx:	not in enabled drivers build config
00:06:26.021  	net/octeon_ep:	not in enabled drivers build config
00:06:26.021  	net/pcap:	not in enabled drivers build config
00:06:26.021  	net/pfe:	not in enabled drivers build config
00:06:26.021  	net/qede:	not in enabled drivers build config
00:06:26.021  	net/ring:	not in enabled drivers build config
00:06:26.021  	net/sfc:	not in enabled drivers build config
00:06:26.021  	net/softnic:	not in enabled drivers build config
00:06:26.021  	net/tap:	not in enabled drivers build config
00:06:26.021  	net/thunderx:	not in enabled drivers build config
00:06:26.021  	net/txgbe:	not in enabled drivers build config
00:06:26.021  	net/vdev_netvsc:	not in enabled drivers build config
00:06:26.021  	net/vhost:	not in enabled drivers build config
00:06:26.021  	net/virtio:	not in enabled drivers build config
00:06:26.021  	net/vmxnet3:	not in enabled drivers build config
00:06:26.021  	raw/*:	missing internal dependency, "rawdev"
00:06:26.021  	crypto/armv8:	not in enabled drivers build config
00:06:26.021  	crypto/bcmfs:	not in enabled drivers build config
00:06:26.021  	crypto/caam_jr:	not in enabled drivers build config
00:06:26.021  	crypto/ccp:	not in enabled drivers build config
00:06:26.021  	crypto/cnxk:	not in enabled drivers build config
00:06:26.021  	crypto/dpaa_sec:	not in enabled drivers build config
00:06:26.021  	crypto/dpaa2_sec:	not in enabled drivers build config
00:06:26.021  	crypto/mvsam:	not in enabled drivers build config
00:06:26.021  	crypto/nitrox:	not in enabled drivers build config
00:06:26.021  	crypto/null:	not in enabled drivers build config
00:06:26.021  	crypto/octeontx:	not in enabled drivers build config
00:06:26.021  	crypto/openssl:	not in enabled drivers build config
00:06:26.021  	crypto/scheduler:	not in enabled drivers build config
00:06:26.021  	crypto/uadk:	not in enabled drivers build config
00:06:26.021  	crypto/virtio:	not in enabled drivers build config
00:06:26.021  	compress/isal:	not in enabled drivers build config
00:06:26.021  	compress/mlx5:	not in enabled drivers build config
00:06:26.021  	compress/nitrox:	not in enabled drivers build config
00:06:26.021  	compress/octeontx:	not in enabled drivers build config
00:06:26.021  	compress/zlib:	not in enabled drivers build config
00:06:26.021  	regex/*:	missing internal dependency, "regexdev"
00:06:26.021  	ml/*:	missing internal dependency, "mldev"
00:06:26.021  	vdpa/ifc:	not in enabled drivers build config
00:06:26.021  	vdpa/mlx5:	not in enabled drivers build config
00:06:26.021  	vdpa/nfp:	not in enabled drivers build config
00:06:26.021  	vdpa/sfc:	not in enabled drivers build config
00:06:26.021  	event/*:	missing internal dependency, "eventdev"
00:06:26.021  	baseband/*:	missing internal dependency, "bbdev"
00:06:26.021  	gpu/*:	missing internal dependency, "gpudev"
00:06:26.021  	
00:06:26.021  
00:06:26.604  Build targets in project: 107
00:06:26.604  
00:06:26.604  DPDK 24.03.0
00:06:26.604  
00:06:26.604    User defined options
00:06:26.604      buildtype          : debug
00:06:26.604      default_library    : shared
00:06:26.604      libdir             : lib
00:06:26.604      prefix             : /var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/build
00:06:26.604      b_sanitize         : address
00:06:26.604      c_args             : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -I/var/jenkins/workspace/vfio-user-phy-autotest/spdk/intel-ipsec-mb/lib -DNO_COMPAT_IMB_API_053 -fPIC -Werror 
00:06:26.604      c_link_args        : -L/var/jenkins/workspace/vfio-user-phy-autotest/spdk/intel-ipsec-mb/lib
00:06:26.604      cpu_instruction_set: native
00:06:26.604      disable_apps       : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev
00:06:26.604      disable_libs       : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev
00:06:26.604      enable_docs        : false
00:06:26.604      enable_drivers     : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,crypto/qat,compress/qat,common/qat,common/mlx5,bus/auxiliary,crypto,crypto/aesni_mb,crypto/mlx5,crypto/ipsec_mb
00:06:26.604      enable_kmods       : false
00:06:26.604      max_lcores         : 128
00:06:26.604      tests              : false
00:06:26.604  
00:06:26.604  Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:06:27.176  ninja: Entering directory `/var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/build-tmp'
00:06:27.176  [1/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:06:27.176  [2/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:06:27.176  [3/363] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:06:27.176  [4/363] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:06:27.176  [5/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:06:27.176  [6/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:06:27.176  [7/363] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:06:27.176  [8/363] Linking static target lib/librte_kvargs.a
00:06:27.176  [9/363] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:06:27.176  [10/363] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:06:27.176  [11/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:06:27.176  [12/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:06:27.176  [13/363] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:06:27.176  [14/363] Compiling C object lib/librte_log.a.p/log_log.c.o
00:06:27.176  [15/363] Linking static target lib/librte_log.a
00:06:27.176  [16/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:06:27.749  [17/363] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:06:28.008  [18/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:06:28.008  [19/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:06:28.008  [20/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:06:28.008  [21/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:06:28.008  [22/363] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:06:28.008  [23/363] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:06:28.008  [24/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:06:28.008  [25/363] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:06:28.008  [26/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:06:28.008  [27/363] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:06:28.008  [28/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:06:28.008  [29/363] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:06:28.008  [30/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:06:28.008  [31/363] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:06:28.008  [32/363] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:06:28.008  [33/363] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:06:28.008  [34/363] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:06:28.008  [35/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:06:28.008  [36/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:06:28.009  [37/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:06:28.009  [38/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:06:28.009  [39/363] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:06:28.009  [40/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:06:28.009  [41/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:06:28.009  [42/363] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:06:28.009  [43/363] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:06:28.009  [44/363] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:06:28.009  [45/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:06:28.009  [46/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:06:28.009  [47/363] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:06:28.009  [48/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:06:28.009  [49/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:06:28.009  [50/363] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:06:28.272  [51/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:06:28.272  [52/363] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:06:28.272  [53/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:06:28.272  [54/363] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:06:28.272  [55/363] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:06:28.272  [56/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:06:28.272  [57/363] Linking static target lib/librte_telemetry.a
00:06:28.272  [58/363] Linking target lib/librte_log.so.24.1
00:06:28.272  [59/363] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:06:28.272  [60/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:06:28.272  [61/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:06:28.272  [62/363] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:06:28.272  [63/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:06:28.272  [64/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:06:28.272  [65/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:06:28.534  [66/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:06:28.534  [67/363] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:06:28.534  [68/363] Linking target lib/librte_kvargs.so.24.1
00:06:28.534  [69/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:06:28.795  [70/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:06:28.795  [71/363] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:06:28.795  [72/363] Linking static target lib/librte_pci.a
00:06:28.795  [73/363] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:06:28.795  [74/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:06:28.795  [75/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:06:29.055  [76/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:06:29.055  [77/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:06:29.055  [78/363] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:06:29.055  [79/363] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:06:29.055  [80/363] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:06:29.055  [81/363] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:06:29.055  [82/363] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:06:29.055  [83/363] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:06:29.055  [84/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:06:29.055  [85/363] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:06:29.055  [86/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:06:29.055  [87/363] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:06:29.055  [88/363] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:06:29.055  [89/363] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:06:29.055  [90/363] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:06:29.055  [91/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:06:29.055  [92/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:06:29.055  [93/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:06:29.055  [94/363] Linking static target lib/librte_meter.a
00:06:29.055  [95/363] Linking static target lib/librte_ring.a
00:06:29.055  [96/363] Linking static target lib/net/libnet_crc_avx512_lib.a
00:06:29.055  [97/363] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:06:29.055  [98/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:06:29.055  [99/363] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:06:29.055  [100/363] Linking target lib/librte_telemetry.so.24.1
00:06:29.055  [101/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:06:29.055  [102/363] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:06:29.055  [103/363] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:06:29.055  [104/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:06:29.318  [105/363] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:06:29.318  [106/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:06:29.318  [107/363] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:06:29.318  [108/363] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:06:29.318  [109/363] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:06:29.318  [110/363] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:06:29.318  [111/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:06:29.318  [112/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:06:29.318  [113/363] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:06:29.318  [114/363] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:06:29.318  [115/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:06:29.318  [116/363] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:06:29.318  [117/363] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:06:29.318  [118/363] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:06:29.580  [119/363] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:06:29.580  [120/363] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:06:29.580  [121/363] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:06:29.580  [122/363] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:06:29.580  [123/363] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:06:29.580  [124/363] Linking static target lib/librte_mempool.a
00:06:29.580  [125/363] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:06:29.580  [126/363] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:06:29.580  [127/363] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:06:29.580  [128/363] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:06:29.580  [129/363] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:06:29.580  [130/363] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:06:29.580  [131/363] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:06:29.580  [132/363] Linking static target lib/librte_rcu.a
00:06:29.580  [133/363] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:06:29.841  [134/363] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:06:29.841  [135/363] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_params.c.o
00:06:29.841  [136/363] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:06:29.841  [137/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:06:29.841  [138/363] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:06:30.100  [139/363] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:06:30.100  [140/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:06:30.100  [141/363] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:06:30.100  [142/363] Linking static target lib/librte_cmdline.a
00:06:30.100  [143/363] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:06:30.101  [144/363] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:06:30.101  [145/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:06:30.101  [146/363] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:06:30.101  [147/363] Linking static target lib/librte_eal.a
00:06:30.101  [148/363] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:06:30.101  [149/363] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:06:30.101  [150/363] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:06:30.101  [151/363] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:06:30.101  [152/363] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:06:30.101  [153/363] Linking static target lib/librte_timer.a
00:06:30.364  [154/363] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:06:30.364  [155/363] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:06:30.364  [156/363] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:06:30.364  [157/363] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:06:30.364  [158/363] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:06:30.627  [159/363] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:06:30.627  [160/363] Linking static target lib/librte_dmadev.a
00:06:30.627  [161/363] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:06:30.627  [162/363] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_common.c.o
00:06:30.627  [163/363] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:06:30.627  [164/363] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:06:30.627  [165/363] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_linux_auxiliary.c.o
00:06:30.627  [166/363] Linking static target drivers/libtmp_rte_bus_auxiliary.a
00:06:30.627  [167/363] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:06:30.893  [168/363] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:06:30.893  [169/363] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:06:30.893  [170/363] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:06:30.893  [171/363] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:06:30.893  [172/363] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:06:30.893  [173/363] Linking static target lib/librte_net.a
00:06:30.893  [174/363] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:06:30.893  [175/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_logs.c.o
00:06:30.893  [176/363] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:06:31.155  [177/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_glue.c.o
00:06:31.155  [178/363] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:06:31.155  [179/363] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:06:31.155  [180/363] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:06:31.155  [181/363] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:06:31.155  [182/363] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:06:31.155  [183/363] Generating drivers/rte_bus_auxiliary.pmd.c with a custom command
00:06:31.155  [184/363] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:06:31.155  [185/363] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:06:31.155  [186/363] Linking static target drivers/libtmp_rte_bus_vdev.a
00:06:31.155  [187/363] Compiling C object drivers/librte_bus_auxiliary.a.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o
00:06:31.155  [188/363] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:06:31.155  [189/363] Compiling C object drivers/librte_bus_auxiliary.so.24.1.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o
00:06:31.155  [190/363] Linking static target drivers/librte_bus_auxiliary.a
00:06:31.420  [191/363] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:06:31.420  [192/363] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:06:31.420  [193/363] Linking static target drivers/libtmp_rte_bus_pci.a
00:06:31.420  [194/363] Linking static target lib/librte_power.a
00:06:31.420  [195/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_malloc.c.o
00:06:31.420  [196/363] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:06:31.420  [197/363] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:06:31.420  [198/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mp.c.o
00:06:31.420  [199/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_pci.c.o
00:06:31.420  [200/363] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:06:31.420  [201/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_auxiliary.c.o
00:06:31.678  [202/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_devx.c.o
00:06:31.678  [203/363] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:06:31.678  [204/363] Generating drivers/rte_bus_auxiliary.sym_chk with a custom command (wrapped by meson to capture output)
00:06:31.678  [205/363] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:06:31.678  [206/363] Linking static target drivers/librte_bus_vdev.a
00:06:31.678  [207/363] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:06:31.678  [208/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_utils.c.o
00:06:31.678  [209/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common.c.o
00:06:31.678  [210/363] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:06:31.678  [211/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_common.c.o
00:06:31.678  [212/363] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:06:31.937  [213/363] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:06:31.937  [214/363] Linking static target drivers/librte_bus_pci.a
00:06:31.937  [215/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_verbs.c.o
00:06:31.937  [216/363] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:06:31.937  [217/363] Linking static target lib/librte_hash.a
00:06:31.937  [218/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_pf2vf.c.o
00:06:31.937  [219/363] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:06:31.937  [220/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_nl.c.o
00:06:31.937  [221/363] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:06:31.937  [222/363] Linking static target lib/librte_compressdev.a
00:06:31.937  [223/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen2.c.o
00:06:31.937  [224/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen3.c.o
00:06:31.937  [225/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen1.c.o
00:06:31.937  [226/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_device.c.o
00:06:32.202  [227/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_os.c.o
00:06:32.202  [228/363] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:06:32.202  [229/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen5.c.o
00:06:32.202  [230/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen1.c.o
00:06:32.202  [231/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen4.c.o
00:06:32.202  [232/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen2.c.o
00:06:32.202  [233/363] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:06:32.202  [234/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen_lce.c.o
00:06:32.202  [235/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen3.c.o
00:06:32.202  [236/363] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:06:32.202  [237/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen4.c.o
00:06:32.202  [238/363] Linking static target lib/librte_reorder.a
00:06:32.459  [239/363] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:06:32.459  [240/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp_pmd.c.o
00:06:32.459  [241/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen5.c.o
00:06:32.717  [242/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_asym_pmd_gen1.c.o
00:06:32.717  [243/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym.c.o
00:06:32.717  [244/363] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:06:32.717  [245/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen2.c.o
00:06:32.717  [246/363] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:06:32.717  [247/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_crypto.c.o
00:06:32.717  [248/363] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:06:32.717  [249/363] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:06:33.034  [250/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen5.c.o
00:06:33.034  [251/363] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_private.c.o
00:06:33.034  [252/363] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_ops.c.o
00:06:33.034  [253/363] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_dek.c.o
00:06:33.291  [254/363] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:06:33.291  [255/363] Linking static target lib/librte_security.a
00:06:33.291  [256/363] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:06:33.291  [257/363] Linking static target drivers/libtmp_rte_mempool_ring.a
00:06:33.291  [258/363] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_xts.c.o
00:06:33.291  [259/363] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto.c.o
00:06:33.550  [260/363] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:06:33.550  [261/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mr.c.o
00:06:33.550  [262/363] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:06:33.550  [263/363] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:06:33.550  [264/363] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:06:33.550  [265/363] Linking static target drivers/librte_mempool_ring.a
00:06:33.550  [266/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen_lce.c.o
00:06:33.807  [267/363] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_gcm.c.o
00:06:33.807  [268/363] Linking static target drivers/libtmp_rte_crypto_mlx5.a
00:06:33.807  [269/363] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:06:34.064  [270/363] Generating drivers/rte_crypto_mlx5.pmd.c with a custom command
00:06:34.064  [271/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_devx_cmds.c.o
00:06:34.064  [272/363] Compiling C object drivers/librte_crypto_mlx5.so.24.1.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o
00:06:34.064  [273/363] Compiling C object drivers/librte_crypto_mlx5.a.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o
00:06:34.064  [274/363] Linking static target drivers/librte_crypto_mlx5.a
00:06:34.064  [275/363] Linking static target drivers/libtmp_rte_common_mlx5.a
00:06:34.321  [276/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_qp.c.o
00:06:34.321  [277/363] Generating drivers/rte_common_mlx5.pmd.c with a custom command
00:06:34.321  [278/363] Compiling C object drivers/librte_common_mlx5.so.24.1.p/meson-generated_.._rte_common_mlx5.pmd.c.o
00:06:34.321  [279/363] Compiling C object drivers/librte_common_mlx5.a.p/meson-generated_.._rte_common_mlx5.pmd.c.o
00:06:34.321  [280/363] Linking static target drivers/librte_common_mlx5.a
00:06:34.886  [281/363] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_gcm.c.o
00:06:34.887  [282/363] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:06:34.887  [283/363] Linking static target lib/librte_mbuf.a
00:06:34.887  [284/363] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_chacha_poly.c.o
00:06:35.452  [285/363] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:06:35.452  [286/363] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_zuc.c.o
00:06:35.452  [287/363] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_kasumi.c.o
00:06:35.452  [288/363] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:06:35.452  [289/363] Linking static target lib/librte_cryptodev.a
00:06:36.017  [290/363] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:06:36.017  [291/363] Linking static target lib/librte_ethdev.a
00:06:36.017  [292/363] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_mb.c.o
00:06:36.583  [293/363] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:06:37.149  [294/363] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_snow3g.c.o
00:06:37.149  [295/363] Linking static target drivers/libtmp_rte_crypto_ipsec_mb.a
00:06:37.407  [296/363] Generating drivers/rte_common_mlx5.sym_chk with a custom command (wrapped by meson to capture output)
00:06:37.407  [297/363] Generating drivers/rte_crypto_ipsec_mb.pmd.c with a custom command
00:06:37.407  [298/363] Compiling C object drivers/librte_crypto_ipsec_mb.a.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o
00:06:37.407  [299/363] Compiling C object drivers/librte_crypto_ipsec_mb.so.24.1.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o
00:06:37.407  [300/363] Linking static target drivers/librte_crypto_ipsec_mb.a
00:06:37.407  [301/363] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:06:37.665  [302/363] Linking target lib/librte_eal.so.24.1
00:06:37.665  [303/363] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:06:37.923  [304/363] Linking target lib/librte_meter.so.24.1
00:06:37.923  [305/363] Linking target lib/librte_ring.so.24.1
00:06:37.923  [306/363] Linking target lib/librte_pci.so.24.1
00:06:37.923  [307/363] Linking target lib/librte_timer.so.24.1
00:06:37.923  [308/363] Linking target drivers/librte_bus_auxiliary.so.24.1
00:06:37.923  [309/363] Linking target drivers/librte_bus_vdev.so.24.1
00:06:37.923  [310/363] Linking target lib/librte_dmadev.so.24.1
00:06:37.923  [311/363] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:06:37.923  [312/363] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:06:37.923  [313/363] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:06:37.923  [314/363] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:06:37.923  [315/363] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:06:37.923  [316/363] Generating symbol file drivers/librte_bus_vdev.so.24.1.p/librte_bus_vdev.so.24.1.symbols
00:06:37.923  [317/363] Generating symbol file drivers/librte_bus_auxiliary.so.24.1.p/librte_bus_auxiliary.so.24.1.symbols
00:06:37.923  [318/363] Linking target lib/librte_rcu.so.24.1
00:06:37.923  [319/363] Linking target lib/librte_mempool.so.24.1
00:06:37.923  [320/363] Linking target drivers/librte_bus_pci.so.24.1
00:06:38.196  [321/363] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:06:38.196  [322/363] Generating symbol file drivers/librte_bus_pci.so.24.1.p/librte_bus_pci.so.24.1.symbols
00:06:38.196  [323/363] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:06:38.197  [324/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp.c.o
00:06:38.197  [325/363] Linking target drivers/librte_mempool_ring.so.24.1
00:06:38.197  [326/363] Linking target lib/librte_mbuf.so.24.1
00:06:38.197  [327/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen4.c.o
00:06:38.197  [328/363] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:06:38.461  [329/363] Linking target lib/librte_reorder.so.24.1
00:06:38.461  [330/363] Linking target lib/librte_compressdev.so.24.1
00:06:38.461  [331/363] Linking target lib/librte_net.so.24.1
00:06:38.461  [332/363] Linking target lib/librte_cryptodev.so.24.1
00:06:38.461  [333/363] Generating symbol file lib/librte_compressdev.so.24.1.p/librte_compressdev.so.24.1.symbols
00:06:38.461  [334/363] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:06:38.461  [335/363] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:06:38.461  [336/363] Linking target lib/librte_cmdline.so.24.1
00:06:38.461  [337/363] Linking target lib/librte_security.so.24.1
00:06:38.461  [338/363] Linking target lib/librte_hash.so.24.1
00:06:38.719  [339/363] Generating symbol file lib/librte_security.so.24.1.p/librte_security.so.24.1.symbols
00:06:38.719  [340/363] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:06:38.719  [341/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym_session.c.o
00:06:38.719  [342/363] Linking target drivers/librte_common_mlx5.so.24.1
00:06:38.977  [343/363] Generating symbol file drivers/librte_common_mlx5.so.24.1.p/librte_common_mlx5.so.24.1.symbols
00:06:38.977  [344/363] Linking target drivers/librte_crypto_ipsec_mb.so.24.1
00:06:38.977  [345/363] Linking target drivers/librte_crypto_mlx5.so.24.1
00:06:39.235  [346/363] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:06:40.186  [347/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_sym_pmd_gen1.c.o
00:06:40.444  [348/363] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:06:40.444  [349/363] Linking target lib/librte_ethdev.so.24.1
00:06:40.444  [350/363] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:06:40.702  [351/363] Linking target lib/librte_power.so.24.1
00:06:41.637  [352/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen3.c.o
00:07:03.559  [353/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_asym.c.o
00:07:03.559  [354/363] Linking static target drivers/libtmp_rte_common_qat.a
00:07:03.559  [355/363] Generating drivers/rte_common_qat.pmd.c with a custom command
00:07:03.559  [356/363] Compiling C object drivers/librte_common_qat.a.p/meson-generated_.._rte_common_qat.pmd.c.o
00:07:03.559  [357/363] Compiling C object drivers/librte_common_qat.so.24.1.p/meson-generated_.._rte_common_qat.pmd.c.o
00:07:03.559  [358/363] Linking static target drivers/librte_common_qat.a
00:07:03.559  [359/363] Linking target drivers/librte_common_qat.so.24.1
00:07:07.754  [360/363] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:07:07.754  [361/363] Linking static target lib/librte_vhost.a
00:07:08.694  [362/363] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:07:08.694  [363/363] Linking target lib/librte_vhost.so.24.1
00:07:08.695  INFO: autodetecting backend as ninja
00:07:08.695  INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/build-tmp -j 48
00:07:10.070    CC lib/ut_mock/mock.o
00:07:10.070    CC lib/log/log.o
00:07:10.070    CC lib/log/log_flags.o
00:07:10.070    CC lib/log/log_deprecated.o
00:07:10.070    CC lib/ut/ut.o
00:07:10.070    LIB libspdk_ut.a
00:07:10.070    LIB libspdk_ut_mock.a
00:07:10.070    LIB libspdk_log.a
00:07:10.070    SO libspdk_ut.so.2.0
00:07:10.070    SO libspdk_ut_mock.so.6.0
00:07:10.070    SO libspdk_log.so.7.1
00:07:10.070    SYMLINK libspdk_ut.so
00:07:10.070    SYMLINK libspdk_ut_mock.so
00:07:10.070    SYMLINK libspdk_log.so
00:07:10.328    CXX lib/trace_parser/trace.o
00:07:10.328    CC lib/dma/dma.o
00:07:10.328    CC lib/ioat/ioat.o
00:07:10.328    CC lib/util/base64.o
00:07:10.328    CC lib/util/bit_array.o
00:07:10.328    CC lib/util/cpuset.o
00:07:10.328    CC lib/util/crc16.o
00:07:10.328    CC lib/util/crc32.o
00:07:10.328    CC lib/util/crc32c.o
00:07:10.328    CC lib/util/crc32_ieee.o
00:07:10.328    CC lib/util/crc64.o
00:07:10.328    CC lib/util/dif.o
00:07:10.328    CC lib/util/fd.o
00:07:10.328    CC lib/util/fd_group.o
00:07:10.328    CC lib/util/file.o
00:07:10.328    CC lib/util/hexlify.o
00:07:10.328    CC lib/util/iov.o
00:07:10.328    CC lib/util/math.o
00:07:10.328    CC lib/util/net.o
00:07:10.328    CC lib/util/pipe.o
00:07:10.328    CC lib/util/strerror_tls.o
00:07:10.328    CC lib/util/string.o
00:07:10.328    CC lib/util/uuid.o
00:07:10.328    CC lib/util/xor.o
00:07:10.328    CC lib/util/zipf.o
00:07:10.328    CC lib/util/md5.o
00:07:10.329    CC lib/vfio_user/host/vfio_user_pci.o
00:07:10.329    CC lib/vfio_user/host/vfio_user.o
00:07:10.587    LIB libspdk_dma.a
00:07:10.587    SO libspdk_dma.so.5.0
00:07:10.844    SYMLINK libspdk_dma.so
00:07:10.844    LIB libspdk_vfio_user.a
00:07:10.844    SO libspdk_vfio_user.so.5.0
00:07:10.844    LIB libspdk_ioat.a
00:07:10.844    SO libspdk_ioat.so.7.0
00:07:10.844    SYMLINK libspdk_vfio_user.so
00:07:10.844    SYMLINK libspdk_ioat.so
00:07:11.102    LIB libspdk_util.a
00:07:11.361    SO libspdk_util.so.10.1
00:07:11.361    SYMLINK libspdk_util.so
00:07:11.361    LIB libspdk_trace_parser.a
00:07:11.619    SO libspdk_trace_parser.so.6.0
00:07:11.620    CC lib/rdma_utils/rdma_utils.o
00:07:11.620    CC lib/conf/conf.o
00:07:11.620    CC lib/idxd/idxd.o
00:07:11.620    CC lib/env_dpdk/env.o
00:07:11.620    CC lib/idxd/idxd_user.o
00:07:11.620    CC lib/env_dpdk/memory.o
00:07:11.620    CC lib/idxd/idxd_kernel.o
00:07:11.620    CC lib/env_dpdk/pci.o
00:07:11.620    CC lib/env_dpdk/init.o
00:07:11.620    CC lib/json/json_parse.o
00:07:11.620    CC lib/vmd/vmd.o
00:07:11.620    CC lib/env_dpdk/threads.o
00:07:11.620    CC lib/json/json_util.o
00:07:11.620    CC lib/env_dpdk/pci_ioat.o
00:07:11.620    CC lib/vmd/led.o
00:07:11.620    CC lib/json/json_write.o
00:07:11.620    CC lib/env_dpdk/pci_virtio.o
00:07:11.620    CC lib/env_dpdk/pci_vmd.o
00:07:11.620    CC lib/env_dpdk/pci_idxd.o
00:07:11.620    CC lib/env_dpdk/pci_event.o
00:07:11.620    CC lib/env_dpdk/sigbus_handler.o
00:07:11.620    CC lib/env_dpdk/pci_dpdk.o
00:07:11.620    CC lib/env_dpdk/pci_dpdk_2207.o
00:07:11.620    CC lib/env_dpdk/pci_dpdk_2211.o
00:07:11.620    SYMLINK libspdk_trace_parser.so
00:07:11.878    LIB libspdk_conf.a
00:07:11.878    SO libspdk_conf.so.6.0
00:07:11.878    LIB libspdk_rdma_utils.a
00:07:11.878    SYMLINK libspdk_conf.so
00:07:11.878    SO libspdk_rdma_utils.so.1.0
00:07:11.878    LIB libspdk_json.a
00:07:11.878    SO libspdk_json.so.6.0
00:07:12.141    SYMLINK libspdk_rdma_utils.so
00:07:12.141    SYMLINK libspdk_json.so
00:07:12.141    CC lib/rdma_provider/common.o
00:07:12.141    CC lib/rdma_provider/rdma_provider_verbs.o
00:07:12.141    CC lib/jsonrpc/jsonrpc_server.o
00:07:12.141    CC lib/jsonrpc/jsonrpc_server_tcp.o
00:07:12.141    CC lib/jsonrpc/jsonrpc_client.o
00:07:12.141    CC lib/jsonrpc/jsonrpc_client_tcp.o
00:07:12.400    LIB libspdk_rdma_provider.a
00:07:12.400    LIB libspdk_idxd.a
00:07:12.400    SO libspdk_rdma_provider.so.7.0
00:07:12.400    SO libspdk_idxd.so.12.1
00:07:12.658    SYMLINK libspdk_rdma_provider.so
00:07:12.658    LIB libspdk_jsonrpc.a
00:07:12.658    SYMLINK libspdk_idxd.so
00:07:12.658    SO libspdk_jsonrpc.so.6.0
00:07:12.658    LIB libspdk_vmd.a
00:07:12.658    SO libspdk_vmd.so.6.0
00:07:12.658    SYMLINK libspdk_jsonrpc.so
00:07:12.658    SYMLINK libspdk_vmd.so
00:07:12.916    CC lib/rpc/rpc.o
00:07:12.916    LIB libspdk_rpc.a
00:07:13.174    SO libspdk_rpc.so.6.0
00:07:13.174    SYMLINK libspdk_rpc.so
00:07:13.174    CC lib/keyring/keyring.o
00:07:13.174    CC lib/keyring/keyring_rpc.o
00:07:13.174    CC lib/notify/notify.o
00:07:13.174    CC lib/trace/trace.o
00:07:13.174    CC lib/notify/notify_rpc.o
00:07:13.174    CC lib/trace/trace_flags.o
00:07:13.174    CC lib/trace/trace_rpc.o
00:07:13.431    LIB libspdk_notify.a
00:07:13.431    SO libspdk_notify.so.6.0
00:07:13.431    SYMLINK libspdk_notify.so
00:07:13.431    LIB libspdk_keyring.a
00:07:13.689    LIB libspdk_trace.a
00:07:13.689    SO libspdk_keyring.so.2.0
00:07:13.689    SO libspdk_trace.so.11.0
00:07:13.689    SYMLINK libspdk_keyring.so
00:07:13.689    SYMLINK libspdk_trace.so
00:07:13.946    CC lib/thread/thread.o
00:07:13.946    CC lib/thread/iobuf.o
00:07:13.946    CC lib/sock/sock.o
00:07:13.946    CC lib/sock/sock_rpc.o
00:07:14.203    LIB libspdk_sock.a
00:07:14.461    SO libspdk_sock.so.10.0
00:07:14.461    SYMLINK libspdk_sock.so
00:07:14.461    CC lib/nvme/nvme_ctrlr_cmd.o
00:07:14.461    CC lib/nvme/nvme_ctrlr.o
00:07:14.461    CC lib/nvme/nvme_fabric.o
00:07:14.461    CC lib/nvme/nvme_ns_cmd.o
00:07:14.461    CC lib/nvme/nvme_ns.o
00:07:14.461    CC lib/nvme/nvme_pcie_common.o
00:07:14.461    CC lib/nvme/nvme_pcie.o
00:07:14.461    CC lib/nvme/nvme_qpair.o
00:07:14.461    CC lib/nvme/nvme.o
00:07:14.461    CC lib/nvme/nvme_quirks.o
00:07:14.461    CC lib/nvme/nvme_transport.o
00:07:14.461    CC lib/nvme/nvme_discovery.o
00:07:14.461    CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:07:14.461    CC lib/nvme/nvme_ns_ocssd_cmd.o
00:07:14.461    CC lib/nvme/nvme_tcp.o
00:07:14.461    CC lib/nvme/nvme_opal.o
00:07:14.461    CC lib/nvme/nvme_io_msg.o
00:07:14.461    CC lib/nvme/nvme_poll_group.o
00:07:14.461    CC lib/nvme/nvme_zns.o
00:07:14.461    CC lib/nvme/nvme_stubs.o
00:07:14.461    CC lib/nvme/nvme_auth.o
00:07:14.461    CC lib/nvme/nvme_cuse.o
00:07:14.461    CC lib/nvme/nvme_vfio_user.o
00:07:14.461    CC lib/nvme/nvme_rdma.o
00:07:14.718    LIB libspdk_env_dpdk.a
00:07:14.719    SO libspdk_env_dpdk.so.15.1
00:07:14.976    SYMLINK libspdk_env_dpdk.so
00:07:15.908    LIB libspdk_thread.a
00:07:15.908    SO libspdk_thread.so.11.0
00:07:15.908    SYMLINK libspdk_thread.so
00:07:16.165    CC lib/vfu_tgt/tgt_endpoint.o
00:07:16.165    CC lib/accel/accel.o
00:07:16.165    CC lib/fsdev/fsdev.o
00:07:16.165    CC lib/virtio/virtio.o
00:07:16.165    CC lib/blob/blobstore.o
00:07:16.165    CC lib/vfu_tgt/tgt_rpc.o
00:07:16.165    CC lib/fsdev/fsdev_io.o
00:07:16.165    CC lib/init/json_config.o
00:07:16.165    CC lib/accel/accel_rpc.o
00:07:16.165    CC lib/fsdev/fsdev_rpc.o
00:07:16.165    CC lib/blob/request.o
00:07:16.165    CC lib/accel/accel_sw.o
00:07:16.165    CC lib/virtio/virtio_vhost_user.o
00:07:16.165    CC lib/init/subsystem.o
00:07:16.165    CC lib/virtio/virtio_vfio_user.o
00:07:16.165    CC lib/blob/zeroes.o
00:07:16.165    CC lib/init/subsystem_rpc.o
00:07:16.165    CC lib/virtio/virtio_pci.o
00:07:16.165    CC lib/blob/blob_bs_dev.o
00:07:16.165    CC lib/init/rpc.o
00:07:16.424    LIB libspdk_init.a
00:07:16.424    SO libspdk_init.so.6.0
00:07:16.682    SYMLINK libspdk_init.so
00:07:16.682    LIB libspdk_vfu_tgt.a
00:07:16.682    LIB libspdk_virtio.a
00:07:16.682    SO libspdk_vfu_tgt.so.3.0
00:07:16.682    SO libspdk_virtio.so.7.0
00:07:16.682    SYMLINK libspdk_vfu_tgt.so
00:07:16.682    SYMLINK libspdk_virtio.so
00:07:16.682    CC lib/event/app.o
00:07:16.682    CC lib/event/reactor.o
00:07:16.682    CC lib/event/log_rpc.o
00:07:16.682    CC lib/event/app_rpc.o
00:07:16.682    CC lib/event/scheduler_static.o
00:07:16.940    LIB libspdk_fsdev.a
00:07:16.940    SO libspdk_fsdev.so.2.0
00:07:17.199    SYMLINK libspdk_fsdev.so
00:07:17.199    CC lib/fuse_dispatcher/fuse_dispatcher.o
00:07:17.457    LIB libspdk_event.a
00:07:17.457    SO libspdk_event.so.14.0
00:07:17.457    SYMLINK libspdk_event.so
00:07:17.716    LIB libspdk_nvme.a
00:07:17.716    LIB libspdk_accel.a
00:07:17.716    SO libspdk_accel.so.16.0
00:07:17.716    SO libspdk_nvme.so.15.0
00:07:17.716    SYMLINK libspdk_accel.so
00:07:17.975    CC lib/bdev/bdev.o
00:07:17.975    CC lib/bdev/bdev_rpc.o
00:07:17.975    CC lib/bdev/bdev_zone.o
00:07:17.975    CC lib/bdev/part.o
00:07:17.975    CC lib/bdev/scsi_nvme.o
00:07:17.975    SYMLINK libspdk_nvme.so
00:07:18.234    LIB libspdk_fuse_dispatcher.a
00:07:18.234    SO libspdk_fuse_dispatcher.so.1.0
00:07:18.234    SYMLINK libspdk_fuse_dispatcher.so
00:07:20.766    LIB libspdk_blob.a
00:07:20.766    SO libspdk_blob.so.12.0
00:07:20.766    SYMLINK libspdk_blob.so
00:07:20.766    CC lib/blobfs/blobfs.o
00:07:20.766    CC lib/blobfs/tree.o
00:07:20.766    CC lib/lvol/lvol.o
00:07:21.701    LIB libspdk_bdev.a
00:07:21.701    SO libspdk_bdev.so.17.0
00:07:21.701    SYMLINK libspdk_bdev.so
00:07:21.701    CC lib/nvmf/ctrlr.o
00:07:21.701    CC lib/scsi/dev.o
00:07:21.701    CC lib/ublk/ublk.o
00:07:21.701    CC lib/nbd/nbd.o
00:07:21.701    CC lib/nbd/nbd_rpc.o
00:07:21.701    CC lib/scsi/lun.o
00:07:21.701    CC lib/ublk/ublk_rpc.o
00:07:21.701    CC lib/ftl/ftl_core.o
00:07:21.701    CC lib/scsi/port.o
00:07:21.701    CC lib/nvmf/ctrlr_discovery.o
00:07:21.701    CC lib/nvmf/ctrlr_bdev.o
00:07:21.701    CC lib/scsi/scsi.o
00:07:21.701    CC lib/nvmf/subsystem.o
00:07:21.701    CC lib/ftl/ftl_init.o
00:07:21.701    CC lib/ftl/ftl_layout.o
00:07:21.701    CC lib/scsi/scsi_bdev.o
00:07:21.701    CC lib/nvmf/nvmf.o
00:07:21.701    CC lib/nvmf/nvmf_rpc.o
00:07:21.701    CC lib/scsi/scsi_pr.o
00:07:21.701    CC lib/ftl/ftl_debug.o
00:07:21.701    CC lib/nvmf/transport.o
00:07:21.701    CC lib/scsi/scsi_rpc.o
00:07:21.701    CC lib/ftl/ftl_io.o
00:07:21.701    CC lib/ftl/ftl_sb.o
00:07:21.701    CC lib/nvmf/tcp.o
00:07:21.701    CC lib/scsi/task.o
00:07:21.701    CC lib/nvmf/stubs.o
00:07:21.701    CC lib/ftl/ftl_l2p.o
00:07:21.701    CC lib/nvmf/mdns_server.o
00:07:21.701    CC lib/ftl/ftl_l2p_flat.o
00:07:21.701    LIB libspdk_blobfs.a
00:07:21.701    CC lib/nvmf/vfio_user.o
00:07:21.701    CC lib/ftl/ftl_nv_cache.o
00:07:21.701    CC lib/nvmf/rdma.o
00:07:21.701    CC lib/ftl/ftl_band.o
00:07:21.701    CC lib/nvmf/auth.o
00:07:21.965    CC lib/ftl/ftl_band_ops.o
00:07:21.965    CC lib/ftl/ftl_writer.o
00:07:21.965    CC lib/ftl/ftl_rq.o
00:07:21.965    CC lib/ftl/ftl_reloc.o
00:07:21.965    CC lib/ftl/ftl_l2p_cache.o
00:07:21.965    CC lib/ftl/ftl_p2l.o
00:07:21.965    CC lib/ftl/ftl_p2l_log.o
00:07:21.965    CC lib/ftl/mngt/ftl_mngt.o
00:07:21.965    CC lib/ftl/mngt/ftl_mngt_bdev.o
00:07:21.965    CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:07:21.965    CC lib/ftl/mngt/ftl_mngt_startup.o
00:07:21.965    SO libspdk_blobfs.so.11.0
00:07:21.965    SYMLINK libspdk_blobfs.so
00:07:21.965    CC lib/ftl/mngt/ftl_mngt_md.o
00:07:22.223    LIB libspdk_lvol.a
00:07:22.223    CC lib/ftl/mngt/ftl_mngt_misc.o
00:07:22.223    SO libspdk_lvol.so.11.0
00:07:22.223    CC lib/ftl/mngt/ftl_mngt_ioch.o
00:07:22.223    CC lib/ftl/mngt/ftl_mngt_l2p.o
00:07:22.223    CC lib/ftl/mngt/ftl_mngt_band.o
00:07:22.223    CC lib/ftl/mngt/ftl_mngt_self_test.o
00:07:22.223    CC lib/ftl/mngt/ftl_mngt_p2l.o
00:07:22.223    CC lib/ftl/mngt/ftl_mngt_recovery.o
00:07:22.223    CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:07:22.223    CC lib/ftl/utils/ftl_conf.o
00:07:22.223    SYMLINK libspdk_lvol.so
00:07:22.223    CC lib/ftl/utils/ftl_md.o
00:07:22.223    CC lib/ftl/utils/ftl_mempool.o
00:07:22.223    CC lib/ftl/utils/ftl_bitmap.o
00:07:22.223    CC lib/ftl/utils/ftl_property.o
00:07:22.484    CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:07:22.484    CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:07:22.484    CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:07:22.484    CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:07:22.484    CC lib/ftl/upgrade/ftl_band_upgrade.o
00:07:22.484    CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:07:22.484    CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:07:22.484    CC lib/ftl/upgrade/ftl_sb_v3.o
00:07:22.743    CC lib/ftl/upgrade/ftl_sb_v5.o
00:07:22.743    CC lib/ftl/nvc/ftl_nvc_dev.o
00:07:22.743    CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:07:22.743    CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:07:22.743    CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:07:22.743    CC lib/ftl/base/ftl_base_dev.o
00:07:22.743    CC lib/ftl/base/ftl_base_bdev.o
00:07:22.743    CC lib/ftl/ftl_trace.o
00:07:23.002    LIB libspdk_nbd.a
00:07:23.002    SO libspdk_nbd.so.7.0
00:07:23.002    SYMLINK libspdk_nbd.so
00:07:23.002    LIB libspdk_scsi.a
00:07:23.002    SO libspdk_scsi.so.9.0
00:07:23.261    SYMLINK libspdk_scsi.so
00:07:23.261    LIB libspdk_ublk.a
00:07:23.261    SO libspdk_ublk.so.3.0
00:07:23.261    SYMLINK libspdk_ublk.so
00:07:23.261    CC lib/vhost/vhost.o
00:07:23.261    CC lib/iscsi/conn.o
00:07:23.261    CC lib/iscsi/init_grp.o
00:07:23.261    CC lib/vhost/vhost_rpc.o
00:07:23.261    CC lib/vhost/vhost_scsi.o
00:07:23.261    CC lib/iscsi/iscsi.o
00:07:23.261    CC lib/iscsi/param.o
00:07:23.261    CC lib/iscsi/portal_grp.o
00:07:23.261    CC lib/vhost/vhost_blk.o
00:07:23.261    CC lib/vhost/rte_vhost_user.o
00:07:23.261    CC lib/iscsi/tgt_node.o
00:07:23.261    CC lib/iscsi/iscsi_subsystem.o
00:07:23.261    CC lib/iscsi/iscsi_rpc.o
00:07:23.261    CC lib/iscsi/task.o
00:07:23.826    LIB libspdk_ftl.a
00:07:23.826    SO libspdk_ftl.so.9.0
00:07:24.390    SYMLINK libspdk_ftl.so
00:07:24.648    LIB libspdk_vhost.a
00:07:24.906    SO libspdk_vhost.so.8.0
00:07:24.906    SYMLINK libspdk_vhost.so
00:07:25.164    LIB libspdk_iscsi.a
00:07:25.164    SO libspdk_iscsi.so.8.0
00:07:25.422    SYMLINK libspdk_iscsi.so
00:07:25.422    LIB libspdk_nvmf.a
00:07:25.422    SO libspdk_nvmf.so.20.0
00:07:25.679    SYMLINK libspdk_nvmf.so
00:07:25.938    CC module/vfu_device/vfu_virtio.o
00:07:25.938    CC module/env_dpdk/env_dpdk_rpc.o
00:07:25.938    CC module/vfu_device/vfu_virtio_blk.o
00:07:25.938    CC module/vfu_device/vfu_virtio_scsi.o
00:07:25.938    CC module/vfu_device/vfu_virtio_rpc.o
00:07:25.938    CC module/vfu_device/vfu_virtio_fs.o
00:07:26.197    CC module/fsdev/aio/fsdev_aio.o
00:07:26.197    CC module/sock/posix/posix.o
00:07:26.197    CC module/accel/ioat/accel_ioat.o
00:07:26.197    CC module/fsdev/aio/fsdev_aio_rpc.o
00:07:26.197    CC module/accel/ioat/accel_ioat_rpc.o
00:07:26.197    CC module/blob/bdev/blob_bdev.o
00:07:26.197    CC module/fsdev/aio/linux_aio_mgr.o
00:07:26.197    CC module/scheduler/dynamic/scheduler_dynamic.o
00:07:26.197    CC module/accel/dsa/accel_dsa.o
00:07:26.197    CC module/accel/dsa/accel_dsa_rpc.o
00:07:26.197    CC module/scheduler/dpdk_governor/dpdk_governor.o
00:07:26.197    CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev.o
00:07:26.197    CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev_rpc.o
00:07:26.197    CC module/keyring/linux/keyring.o
00:07:26.197    CC module/keyring/linux/keyring_rpc.o
00:07:26.197    CC module/accel/error/accel_error.o
00:07:26.197    CC module/scheduler/gscheduler/gscheduler.o
00:07:26.197    CC module/accel/error/accel_error_rpc.o
00:07:26.197    CC module/accel/iaa/accel_iaa.o
00:07:26.197    CC module/keyring/file/keyring.o
00:07:26.197    CC module/keyring/file/keyring_rpc.o
00:07:26.197    CC module/accel/iaa/accel_iaa_rpc.o
00:07:26.197    LIB libspdk_env_dpdk_rpc.a
00:07:26.197    SO libspdk_env_dpdk_rpc.so.6.0
00:07:26.197    SYMLINK libspdk_env_dpdk_rpc.so
00:07:26.455    LIB libspdk_scheduler_dpdk_governor.a
00:07:26.455    SO libspdk_scheduler_dpdk_governor.so.4.0
00:07:26.455    LIB libspdk_accel_ioat.a
00:07:26.455    LIB libspdk_scheduler_dynamic.a
00:07:26.455    SO libspdk_scheduler_dynamic.so.4.0
00:07:26.455    LIB libspdk_keyring_linux.a
00:07:26.455    SO libspdk_accel_ioat.so.6.0
00:07:26.455    LIB libspdk_accel_iaa.a
00:07:26.455    LIB libspdk_accel_error.a
00:07:26.455    SYMLINK libspdk_scheduler_dpdk_governor.so
00:07:26.455    LIB libspdk_scheduler_gscheduler.a
00:07:26.455    LIB libspdk_keyring_file.a
00:07:26.455    SO libspdk_keyring_linux.so.1.0
00:07:26.455    SO libspdk_accel_iaa.so.3.0
00:07:26.455    SO libspdk_accel_error.so.2.0
00:07:26.455    SO libspdk_scheduler_gscheduler.so.4.0
00:07:26.455    SO libspdk_keyring_file.so.2.0
00:07:26.455    SYMLINK libspdk_scheduler_dynamic.so
00:07:26.455    SYMLINK libspdk_accel_ioat.so
00:07:26.455    SYMLINK libspdk_scheduler_gscheduler.so
00:07:26.455    SYMLINK libspdk_keyring_linux.so
00:07:26.455    SYMLINK libspdk_accel_error.so
00:07:26.455    SYMLINK libspdk_accel_iaa.so
00:07:26.455    SYMLINK libspdk_keyring_file.so
00:07:26.455    LIB libspdk_blob_bdev.a
00:07:26.455    LIB libspdk_accel_dsa.a
00:07:26.455    SO libspdk_blob_bdev.so.12.0
00:07:26.455    SO libspdk_accel_dsa.so.5.0
00:07:26.712    SYMLINK libspdk_blob_bdev.so
00:07:26.712    SYMLINK libspdk_accel_dsa.so
00:07:26.978    CC module/bdev/delay/vbdev_delay.o
00:07:26.978    CC module/bdev/delay/vbdev_delay_rpc.o
00:07:26.978    CC module/bdev/error/vbdev_error.o
00:07:26.978    CC module/bdev/passthru/vbdev_passthru.o
00:07:26.978    CC module/bdev/error/vbdev_error_rpc.o
00:07:26.978    CC module/bdev/gpt/gpt.o
00:07:26.978    CC module/bdev/passthru/vbdev_passthru_rpc.o
00:07:26.978    CC module/blobfs/bdev/blobfs_bdev.o
00:07:26.978    CC module/bdev/gpt/vbdev_gpt.o
00:07:26.978    CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:07:26.978    CC module/bdev/malloc/bdev_malloc.o
00:07:26.978    CC module/bdev/malloc/bdev_malloc_rpc.o
00:07:26.978    CC module/bdev/lvol/vbdev_lvol.o
00:07:26.978    CC module/bdev/lvol/vbdev_lvol_rpc.o
00:07:26.978    CC module/bdev/null/bdev_null.o
00:07:26.978    CC module/bdev/null/bdev_null_rpc.o
00:07:26.978    CC module/bdev/raid/bdev_raid.o
00:07:26.978    CC module/bdev/raid/bdev_raid_rpc.o
00:07:26.978    CC module/bdev/virtio/bdev_virtio_scsi.o
00:07:26.978    CC module/bdev/nvme/bdev_nvme.o
00:07:26.978    CC module/bdev/raid/bdev_raid_sb.o
00:07:26.978    CC module/bdev/virtio/bdev_virtio_blk.o
00:07:26.978    CC module/bdev/raid/raid0.o
00:07:26.978    CC module/bdev/split/vbdev_split.o
00:07:26.978    CC module/bdev/nvme/bdev_nvme_rpc.o
00:07:26.978    CC module/bdev/virtio/bdev_virtio_rpc.o
00:07:26.978    CC module/bdev/nvme/nvme_rpc.o
00:07:26.978    CC module/bdev/raid/raid1.o
00:07:26.978    CC module/bdev/split/vbdev_split_rpc.o
00:07:26.978    CC module/bdev/zone_block/vbdev_zone_block.o
00:07:26.978    CC module/bdev/aio/bdev_aio.o
00:07:26.978    CC module/bdev/raid/concat.o
00:07:26.978    CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:07:26.978    CC module/bdev/nvme/bdev_mdns_client.o
00:07:26.978    CC module/bdev/aio/bdev_aio_rpc.o
00:07:26.978    CC module/bdev/nvme/vbdev_opal.o
00:07:26.978    CC module/bdev/iscsi/bdev_iscsi.o
00:07:26.978    CC module/bdev/crypto/vbdev_crypto.o
00:07:26.978    CC module/bdev/nvme/vbdev_opal_rpc.o
00:07:26.978    CC module/bdev/ftl/bdev_ftl.o
00:07:26.978    CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:07:26.978    CC module/bdev/crypto/vbdev_crypto_rpc.o
00:07:26.978    CC module/bdev/ftl/bdev_ftl_rpc.o
00:07:26.978    CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:07:26.978    LIB libspdk_vfu_device.a
00:07:27.237    SO libspdk_vfu_device.so.3.0
00:07:27.237    LIB libspdk_blobfs_bdev.a
00:07:27.237    SO libspdk_blobfs_bdev.so.6.0
00:07:27.237    SYMLINK libspdk_vfu_device.so
00:07:27.494    LIB libspdk_bdev_split.a
00:07:27.494    LIB libspdk_sock_posix.a
00:07:27.494    LIB libspdk_bdev_error.a
00:07:27.494    LIB libspdk_fsdev_aio.a
00:07:27.494    SO libspdk_bdev_split.so.6.0
00:07:27.494    SYMLINK libspdk_blobfs_bdev.so
00:07:27.494    SO libspdk_sock_posix.so.6.0
00:07:27.494    SO libspdk_bdev_error.so.6.0
00:07:27.494    SO libspdk_fsdev_aio.so.1.0
00:07:27.494    LIB libspdk_bdev_ftl.a
00:07:27.494    SO libspdk_bdev_ftl.so.6.0
00:07:27.494    SYMLINK libspdk_bdev_split.so
00:07:27.494    LIB libspdk_bdev_gpt.a
00:07:27.494    SYMLINK libspdk_bdev_error.so
00:07:27.494    SYMLINK libspdk_sock_posix.so
00:07:27.494    LIB libspdk_bdev_null.a
00:07:27.494    SO libspdk_bdev_gpt.so.6.0
00:07:27.494    SYMLINK libspdk_fsdev_aio.so
00:07:27.494    SO libspdk_bdev_null.so.6.0
00:07:27.495    SYMLINK libspdk_bdev_ftl.so
00:07:27.495    LIB libspdk_bdev_passthru.a
00:07:27.495    SYMLINK libspdk_bdev_gpt.so
00:07:27.495    SO libspdk_bdev_passthru.so.6.0
00:07:27.495    LIB libspdk_bdev_iscsi.a
00:07:27.495    LIB libspdk_bdev_crypto.a
00:07:27.495    LIB libspdk_bdev_zone_block.a
00:07:27.495    SYMLINK libspdk_bdev_null.so
00:07:27.495    LIB libspdk_bdev_aio.a
00:07:27.495    LIB libspdk_bdev_delay.a
00:07:27.495    SO libspdk_bdev_iscsi.so.6.0
00:07:27.495    SO libspdk_bdev_crypto.so.6.0
00:07:27.495    SO libspdk_bdev_zone_block.so.6.0
00:07:27.751    SO libspdk_bdev_aio.so.6.0
00:07:27.751    SO libspdk_bdev_delay.so.6.0
00:07:27.751    SYMLINK libspdk_bdev_passthru.so
00:07:27.751    LIB libspdk_bdev_malloc.a
00:07:27.751    SYMLINK libspdk_bdev_iscsi.so
00:07:27.751    SO libspdk_bdev_malloc.so.6.0
00:07:27.751    SYMLINK libspdk_bdev_crypto.so
00:07:27.751    SYMLINK libspdk_bdev_zone_block.so
00:07:27.751    SYMLINK libspdk_bdev_aio.so
00:07:27.751    SYMLINK libspdk_bdev_delay.so
00:07:27.751    SYMLINK libspdk_bdev_malloc.so
00:07:27.751    LIB libspdk_bdev_virtio.a
00:07:27.751    SO libspdk_bdev_virtio.so.6.0
00:07:27.751    SYMLINK libspdk_bdev_virtio.so
00:07:27.751    LIB libspdk_bdev_lvol.a
00:07:28.008    SO libspdk_bdev_lvol.so.6.0
00:07:28.008    SYMLINK libspdk_bdev_lvol.so
00:07:28.571    LIB libspdk_bdev_raid.a
00:07:28.572    SO libspdk_bdev_raid.so.6.0
00:07:28.572    SYMLINK libspdk_bdev_raid.so
00:07:30.084    LIB libspdk_accel_dpdk_cryptodev.a
00:07:30.346    SO libspdk_accel_dpdk_cryptodev.so.3.0
00:07:30.346    SYMLINK libspdk_accel_dpdk_cryptodev.so
00:07:30.605    LIB libspdk_bdev_nvme.a
00:07:30.605    SO libspdk_bdev_nvme.so.7.1
00:07:30.862    SYMLINK libspdk_bdev_nvme.so
00:07:31.120    CC module/event/subsystems/iobuf/iobuf.o
00:07:31.120    CC module/event/subsystems/keyring/keyring.o
00:07:31.120    CC module/event/subsystems/iobuf/iobuf_rpc.o
00:07:31.120    CC module/event/subsystems/vfu_tgt/vfu_tgt.o
00:07:31.120    CC module/event/subsystems/sock/sock.o
00:07:31.120    CC module/event/subsystems/vhost_blk/vhost_blk.o
00:07:31.120    CC module/event/subsystems/fsdev/fsdev.o
00:07:31.120    CC module/event/subsystems/scheduler/scheduler.o
00:07:31.120    CC module/event/subsystems/vmd/vmd.o
00:07:31.120    CC module/event/subsystems/vmd/vmd_rpc.o
00:07:31.378    LIB libspdk_event_keyring.a
00:07:31.378    LIB libspdk_event_vhost_blk.a
00:07:31.378    LIB libspdk_event_fsdev.a
00:07:31.378    LIB libspdk_event_scheduler.a
00:07:31.378    LIB libspdk_event_sock.a
00:07:31.378    LIB libspdk_event_vfu_tgt.a
00:07:31.378    LIB libspdk_event_vmd.a
00:07:31.378    SO libspdk_event_keyring.so.1.0
00:07:31.378    SO libspdk_event_fsdev.so.1.0
00:07:31.378    SO libspdk_event_scheduler.so.4.0
00:07:31.378    SO libspdk_event_vhost_blk.so.3.0
00:07:31.378    SO libspdk_event_vfu_tgt.so.3.0
00:07:31.378    SO libspdk_event_sock.so.5.0
00:07:31.378    LIB libspdk_event_iobuf.a
00:07:31.378    SO libspdk_event_vmd.so.6.0
00:07:31.378    SO libspdk_event_iobuf.so.3.0
00:07:31.378    SYMLINK libspdk_event_keyring.so
00:07:31.378    SYMLINK libspdk_event_fsdev.so
00:07:31.378    SYMLINK libspdk_event_vhost_blk.so
00:07:31.378    SYMLINK libspdk_event_vfu_tgt.so
00:07:31.378    SYMLINK libspdk_event_sock.so
00:07:31.378    SYMLINK libspdk_event_scheduler.so
00:07:31.378    SYMLINK libspdk_event_vmd.so
00:07:31.378    SYMLINK libspdk_event_iobuf.so
00:07:31.637    CC module/event/subsystems/accel/accel.o
00:07:31.637    LIB libspdk_event_accel.a
00:07:31.896    SO libspdk_event_accel.so.6.0
00:07:31.896    SYMLINK libspdk_event_accel.so
00:07:31.896    CC module/event/subsystems/bdev/bdev.o
00:07:32.154    LIB libspdk_event_bdev.a
00:07:32.154    SO libspdk_event_bdev.so.6.0
00:07:32.154    SYMLINK libspdk_event_bdev.so
00:07:32.412    CC module/event/subsystems/scsi/scsi.o
00:07:32.412    CC module/event/subsystems/nvmf/nvmf_rpc.o
00:07:32.412    CC module/event/subsystems/nvmf/nvmf_tgt.o
00:07:32.412    CC module/event/subsystems/nbd/nbd.o
00:07:32.412    CC module/event/subsystems/ublk/ublk.o
00:07:32.670    LIB libspdk_event_ublk.a
00:07:32.670    LIB libspdk_event_nbd.a
00:07:32.670    LIB libspdk_event_scsi.a
00:07:32.670    SO libspdk_event_nbd.so.6.0
00:07:32.670    SO libspdk_event_ublk.so.3.0
00:07:32.670    SO libspdk_event_scsi.so.6.0
00:07:32.670    SYMLINK libspdk_event_nbd.so
00:07:32.670    SYMLINK libspdk_event_ublk.so
00:07:32.670    SYMLINK libspdk_event_scsi.so
00:07:32.670    LIB libspdk_event_nvmf.a
00:07:32.670    SO libspdk_event_nvmf.so.6.0
00:07:32.670    SYMLINK libspdk_event_nvmf.so
00:07:32.928    CC module/event/subsystems/iscsi/iscsi.o
00:07:32.928    CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:07:32.928    LIB libspdk_event_vhost_scsi.a
00:07:32.928    LIB libspdk_event_iscsi.a
00:07:32.928    SO libspdk_event_vhost_scsi.so.3.0
00:07:32.928    SO libspdk_event_iscsi.so.6.0
00:07:33.187    SYMLINK libspdk_event_vhost_scsi.so
00:07:33.187    SYMLINK libspdk_event_iscsi.so
00:07:33.187    SO libspdk.so.6.0
00:07:33.187    SYMLINK libspdk.so
00:07:33.454    CC app/trace_record/trace_record.o
00:07:33.454    CXX app/trace/trace.o
00:07:33.454    CC app/spdk_nvme_perf/perf.o
00:07:33.454    CC app/spdk_nvme_identify/identify.o
00:07:33.454    TEST_HEADER include/spdk/accel.h
00:07:33.454    CC app/spdk_top/spdk_top.o
00:07:33.454    TEST_HEADER include/spdk/accel_module.h
00:07:33.454    TEST_HEADER include/spdk/assert.h
00:07:33.454    TEST_HEADER include/spdk/barrier.h
00:07:33.454    TEST_HEADER include/spdk/base64.h
00:07:33.454    CC app/spdk_nvme_discover/discovery_aer.o
00:07:33.454    TEST_HEADER include/spdk/bdev.h
00:07:33.454    CC app/spdk_lspci/spdk_lspci.o
00:07:33.454    TEST_HEADER include/spdk/bdev_module.h
00:07:33.454    TEST_HEADER include/spdk/bdev_zone.h
00:07:33.454    CC test/rpc_client/rpc_client_test.o
00:07:33.454    TEST_HEADER include/spdk/bit_array.h
00:07:33.454    TEST_HEADER include/spdk/bit_pool.h
00:07:33.454    TEST_HEADER include/spdk/blob_bdev.h
00:07:33.454    TEST_HEADER include/spdk/blobfs_bdev.h
00:07:33.454    TEST_HEADER include/spdk/blobfs.h
00:07:33.454    TEST_HEADER include/spdk/blob.h
00:07:33.454    TEST_HEADER include/spdk/conf.h
00:07:33.454    TEST_HEADER include/spdk/config.h
00:07:33.454    TEST_HEADER include/spdk/cpuset.h
00:07:33.454    TEST_HEADER include/spdk/crc16.h
00:07:33.454    TEST_HEADER include/spdk/crc64.h
00:07:33.454    TEST_HEADER include/spdk/crc32.h
00:07:33.454    TEST_HEADER include/spdk/dif.h
00:07:33.454    TEST_HEADER include/spdk/dma.h
00:07:33.454    TEST_HEADER include/spdk/endian.h
00:07:33.454    TEST_HEADER include/spdk/env_dpdk.h
00:07:33.454    TEST_HEADER include/spdk/env.h
00:07:33.454    TEST_HEADER include/spdk/event.h
00:07:33.454    TEST_HEADER include/spdk/fd_group.h
00:07:33.454    TEST_HEADER include/spdk/fd.h
00:07:33.454    TEST_HEADER include/spdk/file.h
00:07:33.454    TEST_HEADER include/spdk/fsdev_module.h
00:07:33.454    TEST_HEADER include/spdk/fsdev.h
00:07:33.454    TEST_HEADER include/spdk/ftl.h
00:07:33.454    TEST_HEADER include/spdk/fuse_dispatcher.h
00:07:33.454    TEST_HEADER include/spdk/gpt_spec.h
00:07:33.454    TEST_HEADER include/spdk/hexlify.h
00:07:33.454    TEST_HEADER include/spdk/histogram_data.h
00:07:33.454    TEST_HEADER include/spdk/idxd.h
00:07:33.454    TEST_HEADER include/spdk/idxd_spec.h
00:07:33.454    TEST_HEADER include/spdk/init.h
00:07:33.454    TEST_HEADER include/spdk/ioat.h
00:07:33.454    TEST_HEADER include/spdk/ioat_spec.h
00:07:33.454    TEST_HEADER include/spdk/iscsi_spec.h
00:07:33.454    TEST_HEADER include/spdk/json.h
00:07:33.454    TEST_HEADER include/spdk/jsonrpc.h
00:07:33.454    TEST_HEADER include/spdk/keyring.h
00:07:33.454    TEST_HEADER include/spdk/likely.h
00:07:33.454    TEST_HEADER include/spdk/keyring_module.h
00:07:33.454    TEST_HEADER include/spdk/log.h
00:07:33.454    TEST_HEADER include/spdk/lvol.h
00:07:33.454    TEST_HEADER include/spdk/memory.h
00:07:33.454    TEST_HEADER include/spdk/md5.h
00:07:33.454    TEST_HEADER include/spdk/mmio.h
00:07:33.454    TEST_HEADER include/spdk/nbd.h
00:07:33.454    TEST_HEADER include/spdk/notify.h
00:07:33.454    TEST_HEADER include/spdk/net.h
00:07:33.454    TEST_HEADER include/spdk/nvme.h
00:07:33.454    TEST_HEADER include/spdk/nvme_ocssd.h
00:07:33.454    TEST_HEADER include/spdk/nvme_intel.h
00:07:33.454    TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:07:33.454    TEST_HEADER include/spdk/nvme_spec.h
00:07:33.454    TEST_HEADER include/spdk/nvme_zns.h
00:07:33.454    TEST_HEADER include/spdk/nvmf_cmd.h
00:07:33.454    TEST_HEADER include/spdk/nvmf_fc_spec.h
00:07:33.454    TEST_HEADER include/spdk/nvmf_spec.h
00:07:33.454    TEST_HEADER include/spdk/nvmf.h
00:07:33.454    TEST_HEADER include/spdk/nvmf_transport.h
00:07:33.454    TEST_HEADER include/spdk/opal.h
00:07:33.454    TEST_HEADER include/spdk/opal_spec.h
00:07:33.454    TEST_HEADER include/spdk/pipe.h
00:07:33.454    TEST_HEADER include/spdk/pci_ids.h
00:07:33.454    TEST_HEADER include/spdk/queue.h
00:07:33.454    TEST_HEADER include/spdk/reduce.h
00:07:33.454    TEST_HEADER include/spdk/rpc.h
00:07:33.454    TEST_HEADER include/spdk/scsi.h
00:07:33.454    TEST_HEADER include/spdk/scheduler.h
00:07:33.454    TEST_HEADER include/spdk/sock.h
00:07:33.454    TEST_HEADER include/spdk/scsi_spec.h
00:07:33.454    TEST_HEADER include/spdk/string.h
00:07:33.454    TEST_HEADER include/spdk/stdinc.h
00:07:33.454    TEST_HEADER include/spdk/thread.h
00:07:33.454    TEST_HEADER include/spdk/trace.h
00:07:33.454    TEST_HEADER include/spdk/trace_parser.h
00:07:33.454    TEST_HEADER include/spdk/tree.h
00:07:33.454    TEST_HEADER include/spdk/util.h
00:07:33.454    TEST_HEADER include/spdk/ublk.h
00:07:33.454    TEST_HEADER include/spdk/uuid.h
00:07:33.454    TEST_HEADER include/spdk/version.h
00:07:33.454    TEST_HEADER include/spdk/vfio_user_pci.h
00:07:33.454    TEST_HEADER include/spdk/vfio_user_spec.h
00:07:33.454    CC examples/interrupt_tgt/interrupt_tgt.o
00:07:33.454    TEST_HEADER include/spdk/vhost.h
00:07:33.454    TEST_HEADER include/spdk/vmd.h
00:07:33.454    TEST_HEADER include/spdk/xor.h
00:07:33.454    TEST_HEADER include/spdk/zipf.h
00:07:33.454    CXX test/cpp_headers/accel.o
00:07:33.454    CXX test/cpp_headers/accel_module.o
00:07:33.454    CXX test/cpp_headers/assert.o
00:07:33.454    CXX test/cpp_headers/barrier.o
00:07:33.454    CXX test/cpp_headers/base64.o
00:07:33.454    CXX test/cpp_headers/bdev.o
00:07:33.454    CXX test/cpp_headers/bdev_module.o
00:07:33.454    CXX test/cpp_headers/bdev_zone.o
00:07:33.454    CXX test/cpp_headers/bit_array.o
00:07:33.454    CXX test/cpp_headers/bit_pool.o
00:07:33.454    CXX test/cpp_headers/blob_bdev.o
00:07:33.454    CXX test/cpp_headers/blobfs_bdev.o
00:07:33.454    CXX test/cpp_headers/blobfs.o
00:07:33.454    CXX test/cpp_headers/blob.o
00:07:33.454    CXX test/cpp_headers/conf.o
00:07:33.454    CC app/spdk_dd/spdk_dd.o
00:07:33.454    CXX test/cpp_headers/config.o
00:07:33.454    CXX test/cpp_headers/cpuset.o
00:07:33.454    CXX test/cpp_headers/crc16.o
00:07:33.454    CC app/nvmf_tgt/nvmf_main.o
00:07:33.454    CC app/iscsi_tgt/iscsi_tgt.o
00:07:33.454    CC app/spdk_tgt/spdk_tgt.o
00:07:33.454    CXX test/cpp_headers/crc32.o
00:07:33.454    CC test/env/memory/memory_ut.o
00:07:33.454    CC examples/ioat/verify/verify.o
00:07:33.454    CC examples/ioat/perf/perf.o
00:07:33.455    CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:07:33.455    CC test/app/histogram_perf/histogram_perf.o
00:07:33.455    CC test/env/vtophys/vtophys.o
00:07:33.455    CC test/thread/poller_perf/poller_perf.o
00:07:33.455    CC test/env/pci/pci_ut.o
00:07:33.455    CC examples/util/zipf/zipf.o
00:07:33.455    CC test/app/jsoncat/jsoncat.o
00:07:33.455    CC test/app/stub/stub.o
00:07:33.455    CC app/fio/nvme/fio_plugin.o
00:07:33.719    CC test/dma/test_dma/test_dma.o
00:07:33.719    CC test/app/bdev_svc/bdev_svc.o
00:07:33.719    CC app/fio/bdev/fio_plugin.o
00:07:33.719    CC test/env/mem_callbacks/mem_callbacks.o
00:07:33.719    LINK spdk_lspci
00:07:33.719    CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:07:33.983    LINK rpc_client_test
00:07:33.983    LINK jsoncat
00:07:33.983    LINK spdk_nvme_discover
00:07:33.983    LINK vtophys
00:07:33.983    LINK poller_perf
00:07:33.983    LINK interrupt_tgt
00:07:33.983    LINK histogram_perf
00:07:33.983    CXX test/cpp_headers/crc64.o
00:07:33.983    LINK zipf
00:07:33.983    CXX test/cpp_headers/dif.o
00:07:33.983    LINK env_dpdk_post_init
00:07:33.983    CXX test/cpp_headers/dma.o
00:07:33.983    CXX test/cpp_headers/endian.o
00:07:33.983    LINK nvmf_tgt
00:07:33.983    CXX test/cpp_headers/env_dpdk.o
00:07:33.983    CXX test/cpp_headers/env.o
00:07:33.983    CXX test/cpp_headers/event.o
00:07:33.983    CXX test/cpp_headers/fd_group.o
00:07:33.983    CXX test/cpp_headers/fd.o
00:07:33.983    CXX test/cpp_headers/file.o
00:07:33.983    CXX test/cpp_headers/fsdev.o
00:07:33.983    LINK stub
00:07:33.983    CXX test/cpp_headers/fsdev_module.o
00:07:33.983    CXX test/cpp_headers/ftl.o
00:07:33.983    CXX test/cpp_headers/fuse_dispatcher.o
00:07:33.983    LINK spdk_tgt
00:07:33.983    LINK iscsi_tgt
00:07:33.983    CXX test/cpp_headers/gpt_spec.o
00:07:33.984    LINK bdev_svc
00:07:33.984    CXX test/cpp_headers/hexlify.o
00:07:33.984    LINK spdk_trace_record
00:07:33.984    LINK ioat_perf
00:07:33.984    CXX test/cpp_headers/histogram_data.o
00:07:33.984    LINK verify
00:07:34.246    CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:07:34.246    CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:07:34.246    CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:07:34.246    CXX test/cpp_headers/idxd.o
00:07:34.246    CXX test/cpp_headers/idxd_spec.o
00:07:34.246    LINK spdk_dd
00:07:34.246    CXX test/cpp_headers/init.o
00:07:34.246    CXX test/cpp_headers/ioat.o
00:07:34.246    CXX test/cpp_headers/ioat_spec.o
00:07:34.246    CXX test/cpp_headers/iscsi_spec.o
00:07:34.510    CXX test/cpp_headers/json.o
00:07:34.510    CXX test/cpp_headers/jsonrpc.o
00:07:34.510    CXX test/cpp_headers/keyring.o
00:07:34.510    CXX test/cpp_headers/keyring_module.o
00:07:34.510    CXX test/cpp_headers/likely.o
00:07:34.510    CXX test/cpp_headers/log.o
00:07:34.510    LINK spdk_trace
00:07:34.510    CXX test/cpp_headers/lvol.o
00:07:34.510    CXX test/cpp_headers/md5.o
00:07:34.510    CXX test/cpp_headers/memory.o
00:07:34.510    CXX test/cpp_headers/mmio.o
00:07:34.510    CXX test/cpp_headers/nbd.o
00:07:34.510    CXX test/cpp_headers/net.o
00:07:34.510    CXX test/cpp_headers/notify.o
00:07:34.510    CXX test/cpp_headers/nvme.o
00:07:34.510    CXX test/cpp_headers/nvme_intel.o
00:07:34.510    CXX test/cpp_headers/nvme_ocssd.o
00:07:34.510    CXX test/cpp_headers/nvme_ocssd_spec.o
00:07:34.510    CXX test/cpp_headers/nvme_spec.o
00:07:34.510    CXX test/cpp_headers/nvme_zns.o
00:07:34.510    CXX test/cpp_headers/nvmf_cmd.o
00:07:34.510    CXX test/cpp_headers/nvmf_fc_spec.o
00:07:34.510    CXX test/cpp_headers/nvmf.o
00:07:34.510    CXX test/cpp_headers/nvmf_spec.o
00:07:34.510    LINK pci_ut
00:07:34.773    CC examples/sock/hello_world/hello_sock.o
00:07:34.773    CC test/event/event_perf/event_perf.o
00:07:34.773    CC examples/vmd/lsvmd/lsvmd.o
00:07:34.773    CXX test/cpp_headers/nvmf_transport.o
00:07:34.773    CC examples/idxd/perf/perf.o
00:07:34.773    CC examples/thread/thread/thread_ex.o
00:07:34.773    CXX test/cpp_headers/opal.o
00:07:34.773    CXX test/cpp_headers/opal_spec.o
00:07:34.773    CC test/event/reactor/reactor.o
00:07:34.773    CXX test/cpp_headers/pci_ids.o
00:07:34.773    CXX test/cpp_headers/pipe.o
00:07:34.773    LINK nvme_fuzz
00:07:34.773    LINK test_dma
00:07:34.773    CXX test/cpp_headers/queue.o
00:07:34.773    CXX test/cpp_headers/reduce.o
00:07:34.773    CXX test/cpp_headers/rpc.o
00:07:34.773    CXX test/cpp_headers/scheduler.o
00:07:34.773    CXX test/cpp_headers/scsi.o
00:07:34.773    CXX test/cpp_headers/scsi_spec.o
00:07:34.773    CC test/event/reactor_perf/reactor_perf.o
00:07:34.773    CXX test/cpp_headers/sock.o
00:07:35.039    CXX test/cpp_headers/stdinc.o
00:07:35.039    CC examples/vmd/led/led.o
00:07:35.039    CXX test/cpp_headers/string.o
00:07:35.039    CXX test/cpp_headers/thread.o
00:07:35.039    CXX test/cpp_headers/trace.o
00:07:35.039    LINK spdk_bdev
00:07:35.039    CXX test/cpp_headers/trace_parser.o
00:07:35.039    LINK mem_callbacks
00:07:35.039    CXX test/cpp_headers/tree.o
00:07:35.039    CXX test/cpp_headers/ublk.o
00:07:35.039    CC test/event/app_repeat/app_repeat.o
00:07:35.039    CXX test/cpp_headers/util.o
00:07:35.039    CXX test/cpp_headers/uuid.o
00:07:35.039    LINK lsvmd
00:07:35.039    CXX test/cpp_headers/version.o
00:07:35.039    CXX test/cpp_headers/vfio_user_pci.o
00:07:35.039    LINK spdk_nvme
00:07:35.039    CXX test/cpp_headers/vfio_user_spec.o
00:07:35.039    CXX test/cpp_headers/vhost.o
00:07:35.039    LINK event_perf
00:07:35.039    CXX test/cpp_headers/vmd.o
00:07:35.039    CXX test/cpp_headers/xor.o
00:07:35.039    CXX test/cpp_headers/zipf.o
00:07:35.039    CC test/event/scheduler/scheduler.o
00:07:35.039    CC app/vhost/vhost.o
00:07:35.039    LINK reactor
00:07:35.298    LINK reactor_perf
00:07:35.298    LINK led
00:07:35.298    LINK hello_sock
00:07:35.298    LINK thread
00:07:35.298    LINK app_repeat
00:07:35.298    LINK vhost_fuzz
00:07:35.559    LINK spdk_nvme_perf
00:07:35.559    LINK scheduler
00:07:35.559    LINK vhost
00:07:35.559    CC test/nvme/sgl/sgl.o
00:07:35.559    CC test/nvme/aer/aer.o
00:07:35.559    CC test/nvme/reset/reset.o
00:07:35.559    CC test/nvme/e2edp/nvme_dp.o
00:07:35.559    CC test/nvme/startup/startup.o
00:07:35.559    CC test/nvme/fdp/fdp.o
00:07:35.559    CC test/nvme/cuse/cuse.o
00:07:35.559    CC test/nvme/simple_copy/simple_copy.o
00:07:35.559    CC test/nvme/err_injection/err_injection.o
00:07:35.559    CC test/nvme/reserve/reserve.o
00:07:35.559    CC test/nvme/overhead/overhead.o
00:07:35.559    CC test/nvme/connect_stress/connect_stress.o
00:07:35.559    CC test/nvme/doorbell_aers/doorbell_aers.o
00:07:35.559    CC test/nvme/compliance/nvme_compliance.o
00:07:35.559    CC test/nvme/boot_partition/boot_partition.o
00:07:35.559    CC test/nvme/fused_ordering/fused_ordering.o
00:07:35.559    LINK idxd_perf
00:07:35.559    CC test/accel/dif/dif.o
00:07:35.559    LINK spdk_nvme_identify
00:07:35.559    CC test/blobfs/mkfs/mkfs.o
00:07:35.559    CC test/lvol/esnap/esnap.o
00:07:35.818    LINK spdk_top
00:07:35.818    CC examples/nvme/pmr_persistence/pmr_persistence.o
00:07:35.818    CC examples/nvme/hello_world/hello_world.o
00:07:35.818    CC examples/nvme/abort/abort.o
00:07:35.818    CC examples/nvme/hotplug/hotplug.o
00:07:35.818    CC examples/nvme/arbitration/arbitration.o
00:07:35.818    CC examples/nvme/reconnect/reconnect.o
00:07:35.818    CC examples/nvme/nvme_manage/nvme_manage.o
00:07:35.818    CC examples/nvme/cmb_copy/cmb_copy.o
00:07:35.818    CC examples/accel/perf/accel_perf.o
00:07:35.818    LINK boot_partition
00:07:35.818    CC examples/blob/hello_world/hello_blob.o
00:07:35.818    LINK connect_stress
00:07:35.818    CC examples/fsdev/hello_world/hello_fsdev.o
00:07:35.818    LINK startup
00:07:35.818    LINK reserve
00:07:35.818    CC examples/blob/cli/blobcli.o
00:07:35.818    LINK err_injection
00:07:35.818    LINK fused_ordering
00:07:36.078    LINK reset
00:07:36.078    LINK doorbell_aers
00:07:36.078    LINK mkfs
00:07:36.078    LINK simple_copy
00:07:36.078    LINK aer
00:07:36.078    LINK sgl
00:07:36.078    LINK overhead
00:07:36.078    LINK nvme_dp
00:07:36.078    LINK pmr_persistence
00:07:36.078    LINK nvme_compliance
00:07:36.078    LINK memory_ut
00:07:36.078    LINK cmb_copy
00:07:36.078    LINK hotplug
00:07:36.337    LINK fdp
00:07:36.337    LINK hello_world
00:07:36.337    LINK arbitration
00:07:36.337    LINK hello_blob
00:07:36.337    LINK hello_fsdev
00:07:36.596    LINK reconnect
00:07:36.596    LINK abort
00:07:36.596    LINK nvme_manage
00:07:36.854    LINK accel_perf
00:07:36.854    LINK blobcli
00:07:36.854    LINK dif
00:07:37.112    CC examples/bdev/hello_world/hello_bdev.o
00:07:37.112    CC examples/bdev/bdevperf/bdevperf.o
00:07:37.370    CC test/bdev/bdevio/bdevio.o
00:07:37.370    LINK iscsi_fuzz
00:07:37.370    LINK hello_bdev
00:07:37.628    LINK cuse
00:07:37.887    LINK bdevio
00:07:38.145    LINK bdevperf
00:07:38.711    CC examples/nvmf/nvmf/nvmf.o
00:07:38.970    LINK nvmf
00:07:43.219    LINK esnap
00:07:43.219  
00:07:43.219  real	2m2.591s
00:07:43.219  user	26m13.844s
00:07:43.219  sys	3m24.175s
00:07:43.219   19:06:13 make -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:07:43.219   19:06:13 make -- common/autotest_common.sh@10 -- $ set +x
00:07:43.219  ************************************
00:07:43.219  END TEST make
00:07:43.219  ************************************
00:07:43.219   19:06:13  -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:07:43.219   19:06:13  -- pm/common@29 -- $ signal_monitor_resources TERM
00:07:43.219   19:06:13  -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:07:43.219   19:06:13  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:43.219   19:06:13  -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:07:43.219   19:06:13  -- pm/common@44 -- $ pid=427755
00:07:43.219   19:06:13  -- pm/common@50 -- $ kill -TERM 427755
00:07:43.219   19:06:13  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:43.219   19:06:13  -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:07:43.219   19:06:13  -- pm/common@44 -- $ pid=427757
00:07:43.219   19:06:13  -- pm/common@50 -- $ kill -TERM 427757
00:07:43.219   19:06:13  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:43.219   19:06:13  -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:07:43.219   19:06:13  -- pm/common@44 -- $ pid=427758
00:07:43.219   19:06:13  -- pm/common@50 -- $ kill -TERM 427758
00:07:43.219   19:06:13  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:43.219   19:06:13  -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:07:43.219   19:06:13  -- pm/common@44 -- $ pid=427788
00:07:43.219   19:06:13  -- pm/common@50 -- $ sudo -E kill -TERM 427788
00:07:43.219   19:06:13  -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 ))
00:07:43.219   19:06:13  -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/vfio-user-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/vfio-user-phy-autotest/autorun-spdk.conf
00:07:43.219    19:06:14  -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:43.219     19:06:14  -- common/autotest_common.sh@1711 -- # lcov --version
00:07:43.219     19:06:14  -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:43.219    19:06:14  -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:43.219    19:06:14  -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:43.219    19:06:14  -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:43.219    19:06:14  -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:43.219    19:06:14  -- scripts/common.sh@336 -- # IFS=.-:
00:07:43.219    19:06:14  -- scripts/common.sh@336 -- # read -ra ver1
00:07:43.219    19:06:14  -- scripts/common.sh@337 -- # IFS=.-:
00:07:43.219    19:06:14  -- scripts/common.sh@337 -- # read -ra ver2
00:07:43.219    19:06:14  -- scripts/common.sh@338 -- # local 'op=<'
00:07:43.219    19:06:14  -- scripts/common.sh@340 -- # ver1_l=2
00:07:43.219    19:06:14  -- scripts/common.sh@341 -- # ver2_l=1
00:07:43.219    19:06:14  -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:43.219    19:06:14  -- scripts/common.sh@344 -- # case "$op" in
00:07:43.219    19:06:14  -- scripts/common.sh@345 -- # : 1
00:07:43.219    19:06:14  -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:43.219    19:06:14  -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:43.219     19:06:14  -- scripts/common.sh@365 -- # decimal 1
00:07:43.219     19:06:14  -- scripts/common.sh@353 -- # local d=1
00:07:43.219     19:06:14  -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:43.219     19:06:14  -- scripts/common.sh@355 -- # echo 1
00:07:43.219    19:06:14  -- scripts/common.sh@365 -- # ver1[v]=1
00:07:43.219     19:06:14  -- scripts/common.sh@366 -- # decimal 2
00:07:43.219     19:06:14  -- scripts/common.sh@353 -- # local d=2
00:07:43.219     19:06:14  -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:43.219     19:06:14  -- scripts/common.sh@355 -- # echo 2
00:07:43.219    19:06:14  -- scripts/common.sh@366 -- # ver2[v]=2
00:07:43.219    19:06:14  -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:43.219    19:06:14  -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:43.219    19:06:14  -- scripts/common.sh@368 -- # return 0
00:07:43.219    19:06:14  -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:43.219    19:06:14  -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:43.219  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:43.219  		--rc genhtml_branch_coverage=1
00:07:43.219  		--rc genhtml_function_coverage=1
00:07:43.219  		--rc genhtml_legend=1
00:07:43.219  		--rc geninfo_all_blocks=1
00:07:43.219  		--rc geninfo_unexecuted_blocks=1
00:07:43.219  		
00:07:43.219  		'
00:07:43.219    19:06:14  -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:43.219  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:43.219  		--rc genhtml_branch_coverage=1
00:07:43.219  		--rc genhtml_function_coverage=1
00:07:43.219  		--rc genhtml_legend=1
00:07:43.219  		--rc geninfo_all_blocks=1
00:07:43.220  		--rc geninfo_unexecuted_blocks=1
00:07:43.220  		
00:07:43.220  		'
00:07:43.220    19:06:14  -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:07:43.220  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:43.220  		--rc genhtml_branch_coverage=1
00:07:43.220  		--rc genhtml_function_coverage=1
00:07:43.220  		--rc genhtml_legend=1
00:07:43.220  		--rc geninfo_all_blocks=1
00:07:43.220  		--rc geninfo_unexecuted_blocks=1
00:07:43.220  		
00:07:43.220  		'
00:07:43.220    19:06:14  -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:07:43.220  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:43.220  		--rc genhtml_branch_coverage=1
00:07:43.220  		--rc genhtml_function_coverage=1
00:07:43.220  		--rc genhtml_legend=1
00:07:43.220  		--rc geninfo_all_blocks=1
00:07:43.220  		--rc geninfo_unexecuted_blocks=1
00:07:43.220  		
00:07:43.220  		'
00:07:43.220   19:06:14  -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/nvmf/common.sh
00:07:43.220     19:06:14  -- nvmf/common.sh@7 -- # uname -s
00:07:43.220    19:06:14  -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:43.220    19:06:14  -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:43.220    19:06:14  -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:43.220    19:06:14  -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:43.220    19:06:14  -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:43.220    19:06:14  -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:43.220    19:06:14  -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:43.220    19:06:14  -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:43.220    19:06:14  -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:43.220     19:06:14  -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:43.220    19:06:14  -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:07:43.220    19:06:14  -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
00:07:43.220    19:06:14  -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:43.220    19:06:14  -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:43.220    19:06:14  -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:07:43.220    19:06:14  -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:43.220    19:06:14  -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/common.sh
00:07:43.220     19:06:14  -- scripts/common.sh@15 -- # shopt -s extglob
00:07:43.220     19:06:14  -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:43.220     19:06:14  -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:43.220     19:06:14  -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:43.220      19:06:14  -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:43.220      19:06:14  -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:43.220      19:06:14  -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:43.220      19:06:14  -- paths/export.sh@5 -- # export PATH
00:07:43.220      19:06:14  -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:43.220    19:06:14  -- nvmf/common.sh@51 -- # : 0
00:07:43.220    19:06:14  -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:43.220    19:06:14  -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:43.220    19:06:14  -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:43.220    19:06:14  -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:43.220    19:06:14  -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:43.220    19:06:14  -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:07:43.220  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:07:43.220    19:06:14  -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:07:43.220    19:06:14  -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:07:43.220    19:06:14  -- nvmf/common.sh@55 -- # have_pci_nics=0
00:07:43.220   19:06:14  -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:07:43.220    19:06:14  -- spdk/autotest.sh@32 -- # uname -s
00:07:43.220   19:06:14  -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:07:43.220   19:06:14  -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:07:43.220   19:06:14  -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/coredumps
00:07:43.220   19:06:14  -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:07:43.220   19:06:14  -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/coredumps
00:07:43.220   19:06:14  -- spdk/autotest.sh@44 -- # modprobe nbd
00:07:43.220    19:06:14  -- spdk/autotest.sh@46 -- # type -P udevadm
00:07:43.220   19:06:14  -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:07:43.220   19:06:14  -- spdk/autotest.sh@48 -- # udevadm_pid=497183
00:07:43.220   19:06:14  -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:07:43.220   19:06:14  -- spdk/autotest.sh@53 -- # start_monitor_resources
00:07:43.220   19:06:14  -- pm/common@17 -- # local monitor
00:07:43.220   19:06:14  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:07:43.220   19:06:14  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:07:43.220   19:06:14  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:07:43.220    19:06:14  -- pm/common@21 -- # date +%s
00:07:43.220   19:06:14  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:07:43.220    19:06:14  -- pm/common@21 -- # date +%s
00:07:43.220   19:06:14  -- pm/common@25 -- # sleep 1
00:07:43.220    19:06:14  -- pm/common@21 -- # date +%s
00:07:43.220    19:06:14  -- pm/common@21 -- # date +%s
00:07:43.220   19:06:14  -- pm/common@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733508374
00:07:43.220   19:06:14  -- pm/common@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733508374
00:07:43.220   19:06:14  -- pm/common@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733508374
00:07:43.220   19:06:14  -- pm/common@21 -- # sudo -E /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733508374
00:07:43.220  Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733508374_collect-cpu-load.pm.log
00:07:43.220  Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733508374_collect-vmstat.pm.log
00:07:43.220  Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733508374_collect-cpu-temp.pm.log
00:07:43.478  Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733508374_collect-bmc-pm.bmc.pm.log
00:07:44.417   19:06:15  -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:07:44.417   19:06:15  -- spdk/autotest.sh@57 -- # timing_enter autotest
00:07:44.417   19:06:15  -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:44.417   19:06:15  -- common/autotest_common.sh@10 -- # set +x
00:07:44.417   19:06:15  -- spdk/autotest.sh@59 -- # create_test_list
00:07:44.417   19:06:15  -- common/autotest_common.sh@752 -- # xtrace_disable
00:07:44.417   19:06:15  -- common/autotest_common.sh@10 -- # set +x
00:07:44.417     19:06:15  -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/autotest.sh
00:07:44.417    19:06:15  -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:07:44.417   19:06:15  -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:07:44.417   19:06:15  -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output
00:07:44.417   19:06:15  -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:07:44.417   19:06:15  -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:07:44.417    19:06:15  -- common/autotest_common.sh@1457 -- # uname
00:07:44.417   19:06:15  -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']'
00:07:44.417   19:06:15  -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:07:44.417    19:06:15  -- common/autotest_common.sh@1477 -- # uname
00:07:44.417   19:06:15  -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]]
00:07:44.417   19:06:15  -- spdk/autotest.sh@68 -- # [[ y == y ]]
00:07:44.417   19:06:15  -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:07:44.417  lcov: LCOV version 1.15
00:07:44.417   19:06:15  -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_base.info
00:08:02.578  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:08:02.578  geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:08:24.513   19:06:52  -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:08:24.513   19:06:52  -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:24.513   19:06:52  -- common/autotest_common.sh@10 -- # set +x
00:08:24.513   19:06:52  -- spdk/autotest.sh@78 -- # rm -f
00:08:24.513   19:06:52  -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh reset
00:08:24.513  0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:08:24.513  0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:08:24.513  0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:08:24.513  0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:08:24.513  0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:08:24.513  0000:00:04.2 (8086 0e22): Already using the ioatdma driver
00:08:24.513  0000:00:04.1 (8086 0e21): Already using the ioatdma driver
00:08:24.513  0000:00:04.0 (8086 0e20): Already using the ioatdma driver
00:08:24.513  0000:0b:00.0 (8086 0a54): Already using the nvme driver
00:08:24.513  0000:80:04.7 (8086 0e27): Already using the ioatdma driver
00:08:24.513  0000:80:04.6 (8086 0e26): Already using the ioatdma driver
00:08:24.513  0000:80:04.5 (8086 0e25): Already using the ioatdma driver
00:08:24.513  0000:80:04.4 (8086 0e24): Already using the ioatdma driver
00:08:24.513  0000:80:04.3 (8086 0e23): Already using the ioatdma driver
00:08:24.513  0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:08:24.513  0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:08:24.513  0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:08:24.513   19:06:53  -- spdk/autotest.sh@83 -- # get_zoned_devs
00:08:24.513   19:06:53  -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:08:24.513   19:06:53  -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:08:24.513   19:06:53  -- common/autotest_common.sh@1658 -- # zoned_ctrls=()
00:08:24.513   19:06:53  -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls
00:08:24.513   19:06:53  -- common/autotest_common.sh@1659 -- # local nvme bdf ns
00:08:24.513   19:06:53  -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:08:24.513   19:06:53  -- common/autotest_common.sh@1669 -- # bdf=0000:0b:00.0
00:08:24.513   19:06:53  -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:08:24.513   19:06:53  -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1
00:08:24.513   19:06:53  -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:08:24.513   19:06:53  -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:08:24.513   19:06:53  -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:08:24.513   19:06:53  -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:08:24.513   19:06:53  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:08:24.513   19:06:53  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:08:24.513   19:06:53  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:08:24.513   19:06:53  -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:08:24.513   19:06:53  -- scripts/common.sh@390 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:08:24.513  No valid GPT data, bailing
00:08:24.513    19:06:53  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:08:24.513   19:06:53  -- scripts/common.sh@394 -- # pt=
00:08:24.513   19:06:53  -- scripts/common.sh@395 -- # return 1
00:08:24.513   19:06:53  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:08:24.513  1+0 records in
00:08:24.513  1+0 records out
00:08:24.513  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00163161 s, 643 MB/s
00:08:24.513   19:06:53  -- spdk/autotest.sh@105 -- # sync
00:08:24.513   19:06:53  -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:08:24.513   19:06:53  -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:08:24.513    19:06:53  -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:08:25.079    19:06:55  -- spdk/autotest.sh@111 -- # uname -s
00:08:25.079   19:06:55  -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:08:25.079   19:06:55  -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:08:25.079   19:06:55  -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh status
00:08:26.454  Hugepages
00:08:26.454  node     hugesize     free /  total
00:08:26.454  node0   1048576kB        0 /      0
00:08:26.454  node0      2048kB        0 /      0
00:08:26.454  node1   1048576kB        0 /      0
00:08:26.454  node1      2048kB        0 /      0
00:08:26.454  
00:08:26.454  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:08:26.454  I/OAT                     0000:00:04.0    8086   0e20   0       ioatdma          -          -
00:08:26.454  I/OAT                     0000:00:04.1    8086   0e21   0       ioatdma          -          -
00:08:26.454  I/OAT                     0000:00:04.2    8086   0e22   0       ioatdma          -          -
00:08:26.454  I/OAT                     0000:00:04.3    8086   0e23   0       ioatdma          -          -
00:08:26.454  I/OAT                     0000:00:04.4    8086   0e24   0       ioatdma          -          -
00:08:26.454  I/OAT                     0000:00:04.5    8086   0e25   0       ioatdma          -          -
00:08:26.454  I/OAT                     0000:00:04.6    8086   0e26   0       ioatdma          -          -
00:08:26.454  I/OAT                     0000:00:04.7    8086   0e27   0       ioatdma          -          -
00:08:26.454  NVMe                      0000:0b:00.0    8086   0a54   0       nvme             nvme0      nvme0n1
00:08:26.454  I/OAT                     0000:80:04.0    8086   0e20   1       ioatdma          -          -
00:08:26.454  I/OAT                     0000:80:04.1    8086   0e21   1       ioatdma          -          -
00:08:26.454  I/OAT                     0000:80:04.2    8086   0e22   1       ioatdma          -          -
00:08:26.454  I/OAT                     0000:80:04.3    8086   0e23   1       ioatdma          -          -
00:08:26.454  I/OAT                     0000:80:04.4    8086   0e24   1       ioatdma          -          -
00:08:26.454  I/OAT                     0000:80:04.5    8086   0e25   1       ioatdma          -          -
00:08:26.454  I/OAT                     0000:80:04.6    8086   0e26   1       ioatdma          -          -
00:08:26.454  I/OAT                     0000:80:04.7    8086   0e27   1       ioatdma          -          -
00:08:26.454    19:06:57  -- spdk/autotest.sh@117 -- # uname -s
00:08:26.454   19:06:57  -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:08:26.454   19:06:57  -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:08:26.454   19:06:57  -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh
00:08:27.834  0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:08:27.834  0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:08:27.834  0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:08:27.834  0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:08:27.834  0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:08:27.834  0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:08:27.834  0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:08:27.834  0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:08:27.834  0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:08:27.834  0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:08:27.834  0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:08:27.834  0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:08:27.834  0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:08:27.834  0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:08:27.834  0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:08:27.834  0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:08:28.776  0000:0b:00.0 (8086 0a54): nvme -> vfio-pci
00:08:28.776   19:06:59  -- common/autotest_common.sh@1517 -- # sleep 1
00:08:29.715   19:07:00  -- common/autotest_common.sh@1518 -- # bdfs=()
00:08:29.715   19:07:00  -- common/autotest_common.sh@1518 -- # local bdfs
00:08:29.715   19:07:00  -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:08:29.715    19:07:00  -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:08:29.715    19:07:00  -- common/autotest_common.sh@1498 -- # bdfs=()
00:08:29.715    19:07:00  -- common/autotest_common.sh@1498 -- # local bdfs
00:08:29.715    19:07:00  -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:08:29.715     19:07:00  -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/gen_nvme.sh
00:08:29.715     19:07:00  -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:08:29.972    19:07:00  -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:08:29.972    19:07:00  -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0b:00.0
00:08:29.972   19:07:00  -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh reset
00:08:30.906  Waiting for block devices as requested
00:08:31.163  0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:08:31.163  0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:08:31.163  0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:08:31.423  0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:08:31.423  0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:08:31.423  0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:08:31.423  0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:08:31.681  0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:08:31.681  0000:0b:00.0 (8086 0a54): vfio-pci -> nvme
00:08:31.938  0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:08:31.938  0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:08:31.938  0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:08:31.938  0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:08:32.196  0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma
00:08:32.196  0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:08:32.196  0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma
00:08:32.454  0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma
00:08:32.454   19:07:03  -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:08:32.454    19:07:03  -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:0b:00.0
00:08:32.454     19:07:03  -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0
00:08:32.454     19:07:03  -- common/autotest_common.sh@1487 -- # grep 0000:0b:00.0/nvme/nvme
00:08:32.454    19:07:03  -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0
00:08:32.454    19:07:03  -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 ]]
00:08:32.454     19:07:03  -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0
00:08:32.454    19:07:03  -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0
00:08:32.454   19:07:03  -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0
00:08:32.454   19:07:03  -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]]
00:08:32.454    19:07:03  -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0
00:08:32.454    19:07:03  -- common/autotest_common.sh@1531 -- # grep oacs
00:08:32.454    19:07:03  -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:08:32.454   19:07:03  -- common/autotest_common.sh@1531 -- # oacs=' 0xf'
00:08:32.454   19:07:03  -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:08:32.454   19:07:03  -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:08:32.454    19:07:03  -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:08:32.454    19:07:03  -- common/autotest_common.sh@1540 -- # grep unvmcap
00:08:32.454    19:07:03  -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:08:32.454   19:07:03  -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:08:32.454   19:07:03  -- common/autotest_common.sh@1541 -- # [[  0 -eq 0 ]]
00:08:32.454   19:07:03  -- common/autotest_common.sh@1543 -- # continue
00:08:32.454   19:07:03  -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:08:32.454   19:07:03  -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:32.454   19:07:03  -- common/autotest_common.sh@10 -- # set +x
00:08:32.454   19:07:03  -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:08:32.454   19:07:03  -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:32.454   19:07:03  -- common/autotest_common.sh@10 -- # set +x
00:08:32.454   19:07:03  -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh
00:08:33.829  0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:08:33.829  0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:08:33.829  0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:08:33.829  0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:08:33.829  0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:08:33.829  0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:08:33.829  0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:08:33.829  0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:08:33.829  0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:08:33.829  0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:08:33.829  0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:08:33.829  0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:08:33.829  0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:08:33.829  0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:08:33.829  0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:08:33.829  0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:08:34.767  0000:0b:00.0 (8086 0a54): nvme -> vfio-pci
00:08:34.767   19:07:05  -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:08:34.767   19:07:05  -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:34.767   19:07:05  -- common/autotest_common.sh@10 -- # set +x
00:08:34.767   19:07:05  -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:08:34.767   19:07:05  -- common/autotest_common.sh@1578 -- # mapfile -t bdfs
00:08:34.767    19:07:05  -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54
00:08:34.767    19:07:05  -- common/autotest_common.sh@1563 -- # bdfs=()
00:08:34.767    19:07:05  -- common/autotest_common.sh@1563 -- # _bdfs=()
00:08:34.767    19:07:05  -- common/autotest_common.sh@1563 -- # local bdfs _bdfs
00:08:34.767    19:07:05  -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs))
00:08:34.767     19:07:05  -- common/autotest_common.sh@1564 -- # get_nvme_bdfs
00:08:34.767     19:07:05  -- common/autotest_common.sh@1498 -- # bdfs=()
00:08:34.767     19:07:05  -- common/autotest_common.sh@1498 -- # local bdfs
00:08:34.767     19:07:05  -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:08:34.767      19:07:05  -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/gen_nvme.sh
00:08:34.767      19:07:05  -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:08:35.025     19:07:05  -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:08:35.025     19:07:05  -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0b:00.0
00:08:35.025    19:07:05  -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:08:35.025     19:07:05  -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:0b:00.0/device
00:08:35.025    19:07:05  -- common/autotest_common.sh@1566 -- # device=0x0a54
00:08:35.025    19:07:05  -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]]
00:08:35.025    19:07:05  -- common/autotest_common.sh@1568 -- # bdfs+=($bdf)
00:08:35.025    19:07:05  -- common/autotest_common.sh@1572 -- # (( 1 > 0 ))
00:08:35.025    19:07:05  -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:0b:00.0
00:08:35.025   19:07:05  -- common/autotest_common.sh@1579 -- # [[ -z 0000:0b:00.0 ]]
00:08:35.025   19:07:05  -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=507877
00:08:35.025   19:07:05  -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:08:35.025   19:07:05  -- common/autotest_common.sh@1585 -- # waitforlisten 507877
00:08:35.025   19:07:05  -- common/autotest_common.sh@835 -- # '[' -z 507877 ']'
00:08:35.025   19:07:05  -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:35.025   19:07:05  -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:35.025   19:07:05  -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:35.025  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:35.026   19:07:05  -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:35.026   19:07:05  -- common/autotest_common.sh@10 -- # set +x
00:08:35.026  [2024-12-06 19:07:05.914910] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:08:35.026  [2024-12-06 19:07:05.915041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid507877 ]
00:08:35.283  [2024-12-06 19:07:06.052025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:35.283  [2024-12-06 19:07:06.167939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:36.216   19:07:06  -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:36.216   19:07:06  -- common/autotest_common.sh@868 -- # return 0
00:08:36.216   19:07:06  -- common/autotest_common.sh@1587 -- # bdf_id=0
00:08:36.216   19:07:06  -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}"
00:08:36.216   19:07:06  -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:0b:00.0
00:08:39.500  nvme0n1
00:08:39.500   19:07:10  -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:08:39.500  [2024-12-06 19:07:10.390267] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18
00:08:39.500  [2024-12-06 19:07:10.390332] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18
00:08:39.500  request:
00:08:39.500  {
00:08:39.500    "nvme_ctrlr_name": "nvme0",
00:08:39.500    "password": "test",
00:08:39.500    "method": "bdev_nvme_opal_revert",
00:08:39.500    "req_id": 1
00:08:39.500  }
00:08:39.500  Got JSON-RPC error response
00:08:39.500  response:
00:08:39.500  {
00:08:39.500    "code": -32603,
00:08:39.500    "message": "Internal error"
00:08:39.500  }
00:08:39.500   19:07:10  -- common/autotest_common.sh@1591 -- # true
00:08:39.500   19:07:10  -- common/autotest_common.sh@1592 -- # (( ++bdf_id ))
00:08:39.500   19:07:10  -- common/autotest_common.sh@1595 -- # killprocess 507877
00:08:39.500   19:07:10  -- common/autotest_common.sh@954 -- # '[' -z 507877 ']'
00:08:39.500   19:07:10  -- common/autotest_common.sh@958 -- # kill -0 507877
00:08:39.500    19:07:10  -- common/autotest_common.sh@959 -- # uname
00:08:39.500   19:07:10  -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:39.500    19:07:10  -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 507877
00:08:39.500   19:07:10  -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:39.500   19:07:10  -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:39.500   19:07:10  -- common/autotest_common.sh@972 -- # echo 'killing process with pid 507877'
00:08:39.500  killing process with pid 507877
00:08:39.500   19:07:10  -- common/autotest_common.sh@973 -- # kill 507877
00:08:39.500   19:07:10  -- common/autotest_common.sh@978 -- # wait 507877
00:08:42.777   19:07:13  -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:08:42.778   19:07:13  -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:08:42.778   19:07:13  -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:08:42.778   19:07:13  -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:08:42.778   19:07:13  -- spdk/autotest.sh@149 -- # timing_enter lib
00:08:42.778   19:07:13  -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:42.778   19:07:13  -- common/autotest_common.sh@10 -- # set +x
00:08:42.778   19:07:13  -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:08:42.778   19:07:13  -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/env.sh
00:08:42.778   19:07:13  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:42.778   19:07:13  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:42.778   19:07:13  -- common/autotest_common.sh@10 -- # set +x
00:08:42.778  ************************************
00:08:42.778  START TEST env
00:08:42.778  ************************************
00:08:42.778   19:07:13 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/env.sh
00:08:43.035  * Looking for test storage...
00:08:43.035  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env
00:08:43.035    19:07:13 env -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:43.035     19:07:13 env -- common/autotest_common.sh@1711 -- # lcov --version
00:08:43.035     19:07:13 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:43.035    19:07:13 env -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:43.035    19:07:13 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:43.035    19:07:13 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:43.035    19:07:13 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:43.035    19:07:13 env -- scripts/common.sh@336 -- # IFS=.-:
00:08:43.035    19:07:13 env -- scripts/common.sh@336 -- # read -ra ver1
00:08:43.035    19:07:13 env -- scripts/common.sh@337 -- # IFS=.-:
00:08:43.035    19:07:13 env -- scripts/common.sh@337 -- # read -ra ver2
00:08:43.035    19:07:13 env -- scripts/common.sh@338 -- # local 'op=<'
00:08:43.035    19:07:13 env -- scripts/common.sh@340 -- # ver1_l=2
00:08:43.035    19:07:13 env -- scripts/common.sh@341 -- # ver2_l=1
00:08:43.035    19:07:13 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:43.035    19:07:13 env -- scripts/common.sh@344 -- # case "$op" in
00:08:43.035    19:07:13 env -- scripts/common.sh@345 -- # : 1
00:08:43.035    19:07:13 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:43.035    19:07:13 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:43.035     19:07:13 env -- scripts/common.sh@365 -- # decimal 1
00:08:43.035     19:07:13 env -- scripts/common.sh@353 -- # local d=1
00:08:43.035     19:07:13 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:43.035     19:07:13 env -- scripts/common.sh@355 -- # echo 1
00:08:43.035    19:07:13 env -- scripts/common.sh@365 -- # ver1[v]=1
00:08:43.035     19:07:13 env -- scripts/common.sh@366 -- # decimal 2
00:08:43.035     19:07:13 env -- scripts/common.sh@353 -- # local d=2
00:08:43.035     19:07:13 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:43.035     19:07:13 env -- scripts/common.sh@355 -- # echo 2
00:08:43.035    19:07:13 env -- scripts/common.sh@366 -- # ver2[v]=2
00:08:43.035    19:07:13 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:43.035    19:07:13 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:43.035    19:07:13 env -- scripts/common.sh@368 -- # return 0
00:08:43.035    19:07:13 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:43.035    19:07:13 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:43.035  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:43.035  		--rc genhtml_branch_coverage=1
00:08:43.035  		--rc genhtml_function_coverage=1
00:08:43.035  		--rc genhtml_legend=1
00:08:43.035  		--rc geninfo_all_blocks=1
00:08:43.035  		--rc geninfo_unexecuted_blocks=1
00:08:43.035  		
00:08:43.035  		'
00:08:43.035    19:07:13 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:08:43.035  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:43.035  		--rc genhtml_branch_coverage=1
00:08:43.035  		--rc genhtml_function_coverage=1
00:08:43.035  		--rc genhtml_legend=1
00:08:43.035  		--rc geninfo_all_blocks=1
00:08:43.035  		--rc geninfo_unexecuted_blocks=1
00:08:43.035  		
00:08:43.035  		'
00:08:43.035    19:07:13 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:08:43.035  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:43.035  		--rc genhtml_branch_coverage=1
00:08:43.035  		--rc genhtml_function_coverage=1
00:08:43.035  		--rc genhtml_legend=1
00:08:43.035  		--rc geninfo_all_blocks=1
00:08:43.035  		--rc geninfo_unexecuted_blocks=1
00:08:43.035  		
00:08:43.035  		'
00:08:43.035    19:07:13 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:08:43.035  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:43.035  		--rc genhtml_branch_coverage=1
00:08:43.035  		--rc genhtml_function_coverage=1
00:08:43.035  		--rc genhtml_legend=1
00:08:43.035  		--rc geninfo_all_blocks=1
00:08:43.035  		--rc geninfo_unexecuted_blocks=1
00:08:43.035  		
00:08:43.035  		'
00:08:43.035   19:07:13 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/memory/memory_ut
00:08:43.035   19:07:13 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:43.035   19:07:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:43.035   19:07:13 env -- common/autotest_common.sh@10 -- # set +x
00:08:43.035  ************************************
00:08:43.035  START TEST env_memory
00:08:43.035  ************************************
00:08:43.035   19:07:13 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/memory/memory_ut
00:08:43.035  
00:08:43.035  
00:08:43.035       CUnit - A unit testing framework for C - Version 2.1-3
00:08:43.035       http://cunit.sourceforge.net/
00:08:43.035  
00:08:43.035  
00:08:43.035  Suite: memory
00:08:43.035    Test: alloc and free memory map ...[2024-12-06 19:07:13.929058] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:08:43.035  passed
00:08:43.035    Test: mem map translation ...[2024-12-06 19:07:13.969396] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:08:43.035  [2024-12-06 19:07:13.969452] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:08:43.035  [2024-12-06 19:07:13.969525] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:08:43.035  [2024-12-06 19:07:13.969556] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:08:43.350  passed
00:08:43.350    Test: mem map registration ...[2024-12-06 19:07:14.034365] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:08:43.350  [2024-12-06 19:07:14.034415] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:08:43.350  passed
00:08:43.350    Test: mem map adjacent registrations ...passed
00:08:43.350  
00:08:43.350  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:08:43.350                suites      1      1    n/a      0        0
00:08:43.350                 tests      4      4      4      0        0
00:08:43.350               asserts    152    152    152      0      n/a
00:08:43.350  
00:08:43.350  Elapsed time =    0.232 seconds
00:08:43.350  
00:08:43.350  real	0m0.254s
00:08:43.350  user	0m0.240s
00:08:43.350  sys	0m0.013s
00:08:43.350   19:07:14 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:43.350   19:07:14 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:08:43.350  ************************************
00:08:43.350  END TEST env_memory
00:08:43.350  ************************************
00:08:43.350   19:07:14 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/vtophys/vtophys
00:08:43.350   19:07:14 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:43.350   19:07:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:43.350   19:07:14 env -- common/autotest_common.sh@10 -- # set +x
00:08:43.350  ************************************
00:08:43.350  START TEST env_vtophys
00:08:43.350  ************************************
00:08:43.350   19:07:14 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/vtophys/vtophys
00:08:43.350  EAL: lib.eal log level changed from notice to debug
00:08:43.350  EAL: Detected lcore 0 as core 0 on socket 0
00:08:43.350  EAL: Detected lcore 1 as core 1 on socket 0
00:08:43.350  EAL: Detected lcore 2 as core 2 on socket 0
00:08:43.350  EAL: Detected lcore 3 as core 3 on socket 0
00:08:43.350  EAL: Detected lcore 4 as core 4 on socket 0
00:08:43.350  EAL: Detected lcore 5 as core 5 on socket 0
00:08:43.350  EAL: Detected lcore 6 as core 8 on socket 0
00:08:43.350  EAL: Detected lcore 7 as core 9 on socket 0
00:08:43.350  EAL: Detected lcore 8 as core 10 on socket 0
00:08:43.350  EAL: Detected lcore 9 as core 11 on socket 0
00:08:43.350  EAL: Detected lcore 10 as core 12 on socket 0
00:08:43.350  EAL: Detected lcore 11 as core 13 on socket 0
00:08:43.350  EAL: Detected lcore 12 as core 0 on socket 1
00:08:43.350  EAL: Detected lcore 13 as core 1 on socket 1
00:08:43.350  EAL: Detected lcore 14 as core 2 on socket 1
00:08:43.350  EAL: Detected lcore 15 as core 3 on socket 1
00:08:43.350  EAL: Detected lcore 16 as core 4 on socket 1
00:08:43.350  EAL: Detected lcore 17 as core 5 on socket 1
00:08:43.350  EAL: Detected lcore 18 as core 8 on socket 1
00:08:43.350  EAL: Detected lcore 19 as core 9 on socket 1
00:08:43.350  EAL: Detected lcore 20 as core 10 on socket 1
00:08:43.350  EAL: Detected lcore 21 as core 11 on socket 1
00:08:43.350  EAL: Detected lcore 22 as core 12 on socket 1
00:08:43.350  EAL: Detected lcore 23 as core 13 on socket 1
00:08:43.350  EAL: Detected lcore 24 as core 0 on socket 0
00:08:43.350  EAL: Detected lcore 25 as core 1 on socket 0
00:08:43.350  EAL: Detected lcore 26 as core 2 on socket 0
00:08:43.350  EAL: Detected lcore 27 as core 3 on socket 0
00:08:43.350  EAL: Detected lcore 28 as core 4 on socket 0
00:08:43.350  EAL: Detected lcore 29 as core 5 on socket 0
00:08:43.350  EAL: Detected lcore 30 as core 8 on socket 0
00:08:43.350  EAL: Detected lcore 31 as core 9 on socket 0
00:08:43.350  EAL: Detected lcore 32 as core 10 on socket 0
00:08:43.350  EAL: Detected lcore 33 as core 11 on socket 0
00:08:43.350  EAL: Detected lcore 34 as core 12 on socket 0
00:08:43.350  EAL: Detected lcore 35 as core 13 on socket 0
00:08:43.350  EAL: Detected lcore 36 as core 0 on socket 1
00:08:43.350  EAL: Detected lcore 37 as core 1 on socket 1
00:08:43.350  EAL: Detected lcore 38 as core 2 on socket 1
00:08:43.350  EAL: Detected lcore 39 as core 3 on socket 1
00:08:43.350  EAL: Detected lcore 40 as core 4 on socket 1
00:08:43.350  EAL: Detected lcore 41 as core 5 on socket 1
00:08:43.350  EAL: Detected lcore 42 as core 8 on socket 1
00:08:43.350  EAL: Detected lcore 43 as core 9 on socket 1
00:08:43.350  EAL: Detected lcore 44 as core 10 on socket 1
00:08:43.350  EAL: Detected lcore 45 as core 11 on socket 1
00:08:43.350  EAL: Detected lcore 46 as core 12 on socket 1
00:08:43.350  EAL: Detected lcore 47 as core 13 on socket 1
00:08:43.350  EAL: Maximum logical cores by configuration: 128
00:08:43.350  EAL: Detected CPU lcores: 48
00:08:43.350  EAL: Detected NUMA nodes: 2
00:08:43.350  EAL: Checking presence of .so 'librte_eal.so.24.1'
00:08:43.350  EAL: Detected shared linkage of DPDK
00:08:43.350  EAL: No shared files mode enabled, IPC will be disabled
00:08:43.350  EAL: No shared files mode enabled, IPC is disabled
00:08:43.350  EAL: Bus pci wants IOVA as 'DC'
00:08:43.350  EAL: Bus auxiliary wants IOVA as 'DC'
00:08:43.350  EAL: Bus vdev wants IOVA as 'DC'
00:08:43.350  EAL: Buses did not request a specific IOVA mode.
00:08:43.350  EAL: IOMMU is available, selecting IOVA as VA mode.
00:08:43.350  EAL: Selected IOVA mode 'VA'
00:08:43.350  EAL: Probing VFIO support...
00:08:43.350  EAL: IOMMU type 1 (Type 1) is supported
00:08:43.350  EAL: IOMMU type 7 (sPAPR) is not supported
00:08:43.350  EAL: IOMMU type 8 (No-IOMMU) is not supported
00:08:43.350  EAL: VFIO support initialized
00:08:43.350  EAL: Ask a virtual area of 0x2e000 bytes
00:08:43.350  EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:08:43.350  EAL: Setting up physically contiguous memory...
00:08:43.350  EAL: Setting maximum number of open files to 524288
00:08:43.350  EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:08:43.350  EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:08:43.350  EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:08:43.350  EAL: Ask a virtual area of 0x61000 bytes
00:08:43.350  EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:08:43.350  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:08:43.350  EAL: Ask a virtual area of 0x400000000 bytes
00:08:43.350  EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:08:43.350  EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:08:43.350  EAL: Ask a virtual area of 0x61000 bytes
00:08:43.350  EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:08:43.350  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:08:43.350  EAL: Ask a virtual area of 0x400000000 bytes
00:08:43.350  EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:08:43.350  EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:08:43.350  EAL: Ask a virtual area of 0x61000 bytes
00:08:43.350  EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:08:43.350  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:08:43.350  EAL: Ask a virtual area of 0x400000000 bytes
00:08:43.350  EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:08:43.350  EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:08:43.350  EAL: Ask a virtual area of 0x61000 bytes
00:08:43.350  EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:08:43.607  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:08:43.607  EAL: Ask a virtual area of 0x400000000 bytes
00:08:43.607  EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:08:43.607  EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:08:43.607  EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:08:43.607  EAL: Ask a virtual area of 0x61000 bytes
00:08:43.607  EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:08:43.607  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:08:43.607  EAL: Ask a virtual area of 0x400000000 bytes
00:08:43.607  EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:08:43.607  EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:08:43.607  EAL: Ask a virtual area of 0x61000 bytes
00:08:43.607  EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:08:43.607  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:08:43.607  EAL: Ask a virtual area of 0x400000000 bytes
00:08:43.607  EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:08:43.607  EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:08:43.607  EAL: Ask a virtual area of 0x61000 bytes
00:08:43.607  EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:08:43.607  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:08:43.607  EAL: Ask a virtual area of 0x400000000 bytes
00:08:43.607  EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:08:43.607  EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:08:43.607  EAL: Ask a virtual area of 0x61000 bytes
00:08:43.607  EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:08:43.607  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:08:43.607  EAL: Ask a virtual area of 0x400000000 bytes
00:08:43.607  EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:08:43.607  EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:08:43.607  EAL: Hugepages will be freed exactly as allocated.
00:08:43.607  EAL: No shared files mode enabled, IPC is disabled
00:08:43.607  EAL: No shared files mode enabled, IPC is disabled
00:08:43.607  EAL: TSC frequency is ~2700000 KHz
00:08:43.607  EAL: Main lcore 0 is ready (tid=7f8f950bbb40;cpuset=[0])
00:08:43.607  EAL: Trying to obtain current memory policy.
00:08:43.607  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:43.607  EAL: Restoring previous memory policy: 0
00:08:43.607  EAL: request: mp_malloc_sync
00:08:43.607  EAL: No shared files mode enabled, IPC is disabled
00:08:43.607  EAL: Heap on socket 0 was expanded by 2MB
00:08:43.607  EAL: No shared files mode enabled, IPC is disabled
00:08:43.607  EAL: No shared files mode enabled, IPC is disabled
00:08:43.607  EAL: No PCI address specified using 'addr=<id>' in: bus=pci
00:08:43.607  EAL: Mem event callback 'spdk:(nil)' registered
00:08:43.607  
00:08:43.607  
00:08:43.607       CUnit - A unit testing framework for C - Version 2.1-3
00:08:43.607       http://cunit.sourceforge.net/
00:08:43.607  
00:08:43.607  
00:08:43.607  Suite: components_suite
00:08:43.865    Test: vtophys_malloc_test ...passed
00:08:43.865    Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:08:43.865  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:43.865  EAL: Restoring previous memory policy: 4
00:08:43.865  EAL: Calling mem event callback 'spdk:(nil)'
00:08:43.865  EAL: request: mp_malloc_sync
00:08:43.865  EAL: No shared files mode enabled, IPC is disabled
00:08:43.865  EAL: Heap on socket 0 was expanded by 4MB
00:08:43.865  EAL: Calling mem event callback 'spdk:(nil)'
00:08:43.865  EAL: request: mp_malloc_sync
00:08:43.865  EAL: No shared files mode enabled, IPC is disabled
00:08:43.865  EAL: Heap on socket 0 was shrunk by 4MB
00:08:43.865  EAL: Trying to obtain current memory policy.
00:08:43.865  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:43.865  EAL: Restoring previous memory policy: 4
00:08:43.865  EAL: Calling mem event callback 'spdk:(nil)'
00:08:43.865  EAL: request: mp_malloc_sync
00:08:43.865  EAL: No shared files mode enabled, IPC is disabled
00:08:43.865  EAL: Heap on socket 0 was expanded by 6MB
00:08:43.865  EAL: Calling mem event callback 'spdk:(nil)'
00:08:43.865  EAL: request: mp_malloc_sync
00:08:43.865  EAL: No shared files mode enabled, IPC is disabled
00:08:43.865  EAL: Heap on socket 0 was shrunk by 6MB
00:08:43.865  EAL: Trying to obtain current memory policy.
00:08:43.865  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:43.865  EAL: Restoring previous memory policy: 4
00:08:43.865  EAL: Calling mem event callback 'spdk:(nil)'
00:08:43.865  EAL: request: mp_malloc_sync
00:08:43.865  EAL: No shared files mode enabled, IPC is disabled
00:08:43.865  EAL: Heap on socket 0 was expanded by 10MB
00:08:43.865  EAL: Calling mem event callback 'spdk:(nil)'
00:08:43.865  EAL: request: mp_malloc_sync
00:08:43.865  EAL: No shared files mode enabled, IPC is disabled
00:08:43.865  EAL: Heap on socket 0 was shrunk by 10MB
00:08:43.865  EAL: Trying to obtain current memory policy.
00:08:43.865  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:43.865  EAL: Restoring previous memory policy: 4
00:08:43.865  EAL: Calling mem event callback 'spdk:(nil)'
00:08:43.865  EAL: request: mp_malloc_sync
00:08:43.865  EAL: No shared files mode enabled, IPC is disabled
00:08:43.865  EAL: Heap on socket 0 was expanded by 18MB
00:08:43.865  EAL: Calling mem event callback 'spdk:(nil)'
00:08:43.865  EAL: request: mp_malloc_sync
00:08:43.865  EAL: No shared files mode enabled, IPC is disabled
00:08:43.865  EAL: Heap on socket 0 was shrunk by 18MB
00:08:43.865  EAL: Trying to obtain current memory policy.
00:08:43.865  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:43.865  EAL: Restoring previous memory policy: 4
00:08:43.865  EAL: Calling mem event callback 'spdk:(nil)'
00:08:43.865  EAL: request: mp_malloc_sync
00:08:43.865  EAL: No shared files mode enabled, IPC is disabled
00:08:43.865  EAL: Heap on socket 0 was expanded by 34MB
00:08:44.122  EAL: Calling mem event callback 'spdk:(nil)'
00:08:44.122  EAL: request: mp_malloc_sync
00:08:44.122  EAL: No shared files mode enabled, IPC is disabled
00:08:44.122  EAL: Heap on socket 0 was shrunk by 34MB
00:08:44.122  EAL: Trying to obtain current memory policy.
00:08:44.122  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:44.122  EAL: Restoring previous memory policy: 4
00:08:44.122  EAL: Calling mem event callback 'spdk:(nil)'
00:08:44.122  EAL: request: mp_malloc_sync
00:08:44.122  EAL: No shared files mode enabled, IPC is disabled
00:08:44.122  EAL: Heap on socket 0 was expanded by 66MB
00:08:44.122  EAL: Calling mem event callback 'spdk:(nil)'
00:08:44.122  EAL: request: mp_malloc_sync
00:08:44.122  EAL: No shared files mode enabled, IPC is disabled
00:08:44.122  EAL: Heap on socket 0 was shrunk by 66MB
00:08:44.379  EAL: Trying to obtain current memory policy.
00:08:44.379  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:44.379  EAL: Restoring previous memory policy: 4
00:08:44.379  EAL: Calling mem event callback 'spdk:(nil)'
00:08:44.379  EAL: request: mp_malloc_sync
00:08:44.379  EAL: No shared files mode enabled, IPC is disabled
00:08:44.379  EAL: Heap on socket 0 was expanded by 130MB
00:08:44.379  EAL: Calling mem event callback 'spdk:(nil)'
00:08:44.636  EAL: request: mp_malloc_sync
00:08:44.636  EAL: No shared files mode enabled, IPC is disabled
00:08:44.636  EAL: Heap on socket 0 was shrunk by 130MB
00:08:44.636  EAL: Trying to obtain current memory policy.
00:08:44.636  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:44.636  EAL: Restoring previous memory policy: 4
00:08:44.636  EAL: Calling mem event callback 'spdk:(nil)'
00:08:44.636  EAL: request: mp_malloc_sync
00:08:44.636  EAL: No shared files mode enabled, IPC is disabled
00:08:44.636  EAL: Heap on socket 0 was expanded by 258MB
00:08:45.325  EAL: Calling mem event callback 'spdk:(nil)'
00:08:45.325  EAL: request: mp_malloc_sync
00:08:45.325  EAL: No shared files mode enabled, IPC is disabled
00:08:45.325  EAL: Heap on socket 0 was shrunk by 258MB
00:08:45.583  EAL: Trying to obtain current memory policy.
00:08:45.583  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:45.583  EAL: Restoring previous memory policy: 4
00:08:45.583  EAL: Calling mem event callback 'spdk:(nil)'
00:08:45.583  EAL: request: mp_malloc_sync
00:08:45.583  EAL: No shared files mode enabled, IPC is disabled
00:08:45.583  EAL: Heap on socket 0 was expanded by 514MB
00:08:46.515  EAL: Calling mem event callback 'spdk:(nil)'
00:08:46.515  EAL: request: mp_malloc_sync
00:08:46.515  EAL: No shared files mode enabled, IPC is disabled
00:08:46.515  EAL: Heap on socket 0 was shrunk by 514MB
00:08:47.081  EAL: Trying to obtain current memory policy.
00:08:47.081  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:47.364  EAL: Restoring previous memory policy: 4
00:08:47.364  EAL: Calling mem event callback 'spdk:(nil)'
00:08:47.364  EAL: request: mp_malloc_sync
00:08:47.364  EAL: No shared files mode enabled, IPC is disabled
00:08:47.364  EAL: Heap on socket 0 was expanded by 1026MB
00:08:49.266  EAL: Calling mem event callback 'spdk:(nil)'
00:08:49.266  EAL: request: mp_malloc_sync
00:08:49.267  EAL: No shared files mode enabled, IPC is disabled
00:08:49.267  EAL: Heap on socket 0 was shrunk by 1026MB
00:08:50.640  passed
00:08:50.640  
00:08:50.640  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:08:50.640                suites      1      1    n/a      0        0
00:08:50.640                 tests      2      2      2      0        0
00:08:50.640               asserts    497    497    497      0      n/a
00:08:50.640  
00:08:50.640  Elapsed time =    6.883 seconds
00:08:50.640  EAL: Calling mem event callback 'spdk:(nil)'
00:08:50.640  EAL: request: mp_malloc_sync
00:08:50.640  EAL: No shared files mode enabled, IPC is disabled
00:08:50.640  EAL: Heap on socket 0 was shrunk by 2MB
00:08:50.640  EAL: No shared files mode enabled, IPC is disabled
00:08:50.640  EAL: No shared files mode enabled, IPC is disabled
00:08:50.640  EAL: No shared files mode enabled, IPC is disabled
00:08:50.640  
00:08:50.640  real	0m7.139s
00:08:50.640  user	0m6.097s
00:08:50.640  sys	0m0.990s
00:08:50.640   19:07:21 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:50.640   19:07:21 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:08:50.640  ************************************
00:08:50.640  END TEST env_vtophys
00:08:50.640  ************************************
00:08:50.640   19:07:21 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/pci/pci_ut
00:08:50.640   19:07:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:50.640   19:07:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:50.640   19:07:21 env -- common/autotest_common.sh@10 -- # set +x
00:08:50.640  ************************************
00:08:50.640  START TEST env_pci
00:08:50.640  ************************************
00:08:50.640   19:07:21 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/pci/pci_ut
00:08:50.640  
00:08:50.640  
00:08:50.640       CUnit - A unit testing framework for C - Version 2.1-3
00:08:50.640       http://cunit.sourceforge.net/
00:08:50.640  
00:08:50.640  
00:08:50.640  Suite: pci
00:08:50.640    Test: pci_hook ...[2024-12-06 19:07:21.410720] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 509724 has claimed it
00:08:50.640  EAL: Cannot find device (10000:00:01.0)
00:08:50.641  EAL: Failed to attach device on primary process
00:08:50.641  passed
00:08:50.641  
00:08:50.641  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:08:50.641                suites      1      1    n/a      0        0
00:08:50.641                 tests      1      1      1      0        0
00:08:50.641               asserts     25     25     25      0      n/a
00:08:50.641  
00:08:50.641  Elapsed time =    0.043 seconds
00:08:50.641  
00:08:50.641  real	0m0.102s
00:08:50.641  user	0m0.039s
00:08:50.641  sys	0m0.062s
00:08:50.641   19:07:21 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:50.641   19:07:21 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:08:50.641  ************************************
00:08:50.641  END TEST env_pci
00:08:50.641  ************************************
00:08:50.641   19:07:21 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:08:50.641    19:07:21 env -- env/env.sh@15 -- # uname
00:08:50.641   19:07:21 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:08:50.641   19:07:21 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:08:50.641   19:07:21 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:08:50.641   19:07:21 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:08:50.641   19:07:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:50.641   19:07:21 env -- common/autotest_common.sh@10 -- # set +x
00:08:50.641  ************************************
00:08:50.641  START TEST env_dpdk_post_init
00:08:50.641  ************************************
00:08:50.641   19:07:21 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:08:50.641  EAL: Detected CPU lcores: 48
00:08:50.641  EAL: Detected NUMA nodes: 2
00:08:50.641  EAL: Detected shared linkage of DPDK
00:08:50.900  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:08:50.900  EAL: Selected IOVA mode 'VA'
00:08:50.900  EAL: VFIO support initialized
00:08:50.900  TELEMETRY: No legacy callbacks, legacy socket not created
00:08:50.900  EAL: Using IOMMU type 1 (Type 1)
00:08:50.900  EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0)
00:08:50.900  EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0)
00:08:50.900  EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0)
00:08:50.900  EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0)
00:08:50.900  EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0)
00:08:50.900  EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0)
00:08:50.900  EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0)
00:08:51.158  EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0)
00:08:51.728  EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:0b:00.0 (socket 0)
00:08:51.728  EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1)
00:08:51.728  EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1)
00:08:51.728  EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1)
00:08:51.728  EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1)
00:08:51.728  EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1)
00:08:51.728  EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1)
00:08:51.986  EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1)
00:08:51.986  EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1)
00:08:55.265  EAL: Releasing PCI mapped resource for 0000:0b:00.0
00:08:55.265  EAL: Calling pci_unmap_resource for 0000:0b:00.0 at 0x202001020000
00:08:55.265  Starting DPDK initialization...
00:08:55.265  Starting SPDK post initialization...
00:08:55.265  SPDK NVMe probe
00:08:55.265  Attaching to 0000:0b:00.0
00:08:55.265  Attached to 0000:0b:00.0
00:08:55.265  Cleaning up...
00:08:55.265  
00:08:55.265  real	0m4.479s
00:08:55.265  user	0m3.077s
00:08:55.265  sys	0m0.463s
00:08:55.265   19:07:26 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:55.265   19:07:26 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:08:55.265  ************************************
00:08:55.265  END TEST env_dpdk_post_init
00:08:55.265  ************************************
00:08:55.265    19:07:26 env -- env/env.sh@26 -- # uname
00:08:55.265   19:07:26 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:08:55.265   19:07:26 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:08:55.265   19:07:26 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:55.265   19:07:26 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:55.265   19:07:26 env -- common/autotest_common.sh@10 -- # set +x
00:08:55.265  ************************************
00:08:55.265  START TEST env_mem_callbacks
00:08:55.265  ************************************
00:08:55.265   19:07:26 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:08:55.265  EAL: Detected CPU lcores: 48
00:08:55.265  EAL: Detected NUMA nodes: 2
00:08:55.265  EAL: Detected shared linkage of DPDK
00:08:55.265  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:08:55.265  EAL: Selected IOVA mode 'VA'
00:08:55.266  EAL: VFIO support initialized
00:08:55.266  TELEMETRY: No legacy callbacks, legacy socket not created
00:08:55.266  
00:08:55.266  
00:08:55.266       CUnit - A unit testing framework for C - Version 2.1-3
00:08:55.266       http://cunit.sourceforge.net/
00:08:55.266  
00:08:55.266  
00:08:55.266  Suite: memory
00:08:55.266    Test: test ...
00:08:55.266  register 0x200000200000 2097152
00:08:55.266  malloc 3145728
00:08:55.266  register 0x200000400000 4194304
00:08:55.266  buf 0x2000004fffc0 len 3145728 PASSED
00:08:55.266  malloc 64
00:08:55.266  buf 0x2000004ffec0 len 64 PASSED
00:08:55.266  malloc 4194304
00:08:55.266  register 0x200000800000 6291456
00:08:55.266  buf 0x2000009fffc0 len 4194304 PASSED
00:08:55.266  free 0x2000004fffc0 3145728
00:08:55.266  free 0x2000004ffec0 64
00:08:55.266  unregister 0x200000400000 4194304 PASSED
00:08:55.266  free 0x2000009fffc0 4194304
00:08:55.266  unregister 0x200000800000 6291456 PASSED
00:08:55.266  malloc 8388608
00:08:55.266  register 0x200000400000 10485760
00:08:55.266  buf 0x2000005fffc0 len 8388608 PASSED
00:08:55.266  free 0x2000005fffc0 8388608
00:08:55.266  unregister 0x200000400000 10485760 PASSED
00:08:55.523  passed
00:08:55.523  
00:08:55.523  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:08:55.523                suites      1      1    n/a      0        0
00:08:55.523                 tests      1      1      1      0        0
00:08:55.523               asserts     15     15     15      0      n/a
00:08:55.523  
00:08:55.523  Elapsed time =    0.051 seconds
00:08:55.523  
00:08:55.523  real	0m0.172s
00:08:55.523  user	0m0.099s
00:08:55.523  sys	0m0.072s
00:08:55.523   19:07:26 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:55.523   19:07:26 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:08:55.523  ************************************
00:08:55.523  END TEST env_mem_callbacks
00:08:55.523  ************************************
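The env_mem_callbacks output above shows DPDK memory-event callbacks firing as the heap grows and shrinks: each transient `register addr len` line is later balanced by an `unregister addr len` of the same region, which is what the PASSED lines assert. A minimal standalone sketch of that pairing check (the real assertions live in the mem_callbacks C test; the initial 2 MiB region at 0x200000200000 backs the base heap and is never unregistered in the log, so it is left out here):

```shell
#!/bin/sh
# Replay the transient register/unregister lines printed by the CUnit test
# above and verify every registered (addr, len) pair is unregistered again.
result=$(printf '%s\n' \
  'register 0x200000400000 4194304' \
  'unregister 0x200000400000 4194304' \
  'register 0x200000800000 6291456' \
  'unregister 0x200000800000 6291456' \
  'register 0x200000400000 10485760' \
  'unregister 0x200000400000 10485760' |
awk '
  $1 == "register"   { open[$2 " " $3]++ }
  $1 == "unregister" { open[$2 " " $3]-- }
  END {
    for (k in open) if (open[k] != 0) { print "UNBALANCED " k; exit 1 }
    print "balanced"
  }')
echo "$result"
```

Running it prints `balanced`, mirroring the six PASSED register/unregister events above.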
00:08:55.523  
00:08:55.523  real	0m12.549s
00:08:55.523  user	0m9.739s
00:08:55.523  sys	0m1.839s
00:08:55.523   19:07:26 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:55.523   19:07:26 env -- common/autotest_common.sh@10 -- # set +x
00:08:55.523  ************************************
00:08:55.523  END TEST env
00:08:55.523  ************************************
00:08:55.523   19:07:26  -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/rpc.sh
00:08:55.523   19:07:26  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:55.523   19:07:26  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:55.523   19:07:26  -- common/autotest_common.sh@10 -- # set +x
00:08:55.523  ************************************
00:08:55.523  START TEST rpc
00:08:55.523  ************************************
00:08:55.523   19:07:26 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/rpc.sh
00:08:55.523  * Looking for test storage...
00:08:55.523  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc
00:08:55.523    19:07:26 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:55.523     19:07:26 rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:08:55.523     19:07:26 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:55.523    19:07:26 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:55.523    19:07:26 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:55.523    19:07:26 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:55.523    19:07:26 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:55.523    19:07:26 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:08:55.523    19:07:26 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:08:55.523    19:07:26 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:08:55.523    19:07:26 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:08:55.523    19:07:26 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:08:55.523    19:07:26 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:08:55.523    19:07:26 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:08:55.523    19:07:26 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:55.523    19:07:26 rpc -- scripts/common.sh@344 -- # case "$op" in
00:08:55.523    19:07:26 rpc -- scripts/common.sh@345 -- # : 1
00:08:55.523    19:07:26 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:55.523    19:07:26 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:55.523     19:07:26 rpc -- scripts/common.sh@365 -- # decimal 1
00:08:55.523     19:07:26 rpc -- scripts/common.sh@353 -- # local d=1
00:08:55.523     19:07:26 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:55.523     19:07:26 rpc -- scripts/common.sh@355 -- # echo 1
00:08:55.523    19:07:26 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:08:55.523     19:07:26 rpc -- scripts/common.sh@366 -- # decimal 2
00:08:55.523     19:07:26 rpc -- scripts/common.sh@353 -- # local d=2
00:08:55.523     19:07:26 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:55.523     19:07:26 rpc -- scripts/common.sh@355 -- # echo 2
00:08:55.523    19:07:26 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:08:55.523    19:07:26 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:55.523    19:07:26 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:55.523    19:07:26 rpc -- scripts/common.sh@368 -- # return 0
00:08:55.523    19:07:26 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:55.523    19:07:26 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:55.523  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:55.523  		--rc genhtml_branch_coverage=1
00:08:55.523  		--rc genhtml_function_coverage=1
00:08:55.523  		--rc genhtml_legend=1
00:08:55.523  		--rc geninfo_all_blocks=1
00:08:55.523  		--rc geninfo_unexecuted_blocks=1
00:08:55.523  		
00:08:55.523  		'
00:08:55.523    19:07:26 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:08:55.523  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:55.523  		--rc genhtml_branch_coverage=1
00:08:55.523  		--rc genhtml_function_coverage=1
00:08:55.523  		--rc genhtml_legend=1
00:08:55.523  		--rc geninfo_all_blocks=1
00:08:55.523  		--rc geninfo_unexecuted_blocks=1
00:08:55.524  		
00:08:55.524  		'
00:08:55.524    19:07:26 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:08:55.524  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:55.524  		--rc genhtml_branch_coverage=1
00:08:55.524  		--rc genhtml_function_coverage=1
00:08:55.524  		--rc genhtml_legend=1
00:08:55.524  		--rc geninfo_all_blocks=1
00:08:55.524  		--rc geninfo_unexecuted_blocks=1
00:08:55.524  		
00:08:55.524  		'
00:08:55.524    19:07:26 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:08:55.524  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:55.524  		--rc genhtml_branch_coverage=1
00:08:55.524  		--rc genhtml_function_coverage=1
00:08:55.524  		--rc genhtml_legend=1
00:08:55.524  		--rc geninfo_all_blocks=1
00:08:55.524  		--rc geninfo_unexecuted_blocks=1
00:08:55.524  		
00:08:55.524  		'
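The xtrace block above is scripts/common.sh deciding which lcov option names to use: `lt 1.15 2` splits both version strings on `.-:` and compares field by field, and because the installed lcov is older than 2, the legacy `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` spellings are exported. A compact stand-in for that comparison (using GNU `sort -V` instead of the field loop in the real script):

```shell
#!/bin/sh
# Minimal stand-in for the cmp_versions walk traced above:
# "lt A B" succeeds when version A sorts strictly before version B.
lt() {
  [ "$1" != "$2" ] &&
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

lt 1.15 2 && echo "lcov 1.15 < 2: use legacy --rc lcov_* option names"
```

The same predicate drives both the `LCOV_OPTS` and `LCOV` exports that follow in the trace.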
00:08:55.524   19:07:26 rpc -- rpc/rpc.sh@65 -- # spdk_pid=510511
00:08:55.524   19:07:26 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:08:55.524   19:07:26 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:08:55.524   19:07:26 rpc -- rpc/rpc.sh@67 -- # waitforlisten 510511
00:08:55.524   19:07:26 rpc -- common/autotest_common.sh@835 -- # '[' -z 510511 ']'
00:08:55.524   19:07:26 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:55.524   19:07:26 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:55.524   19:07:26 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:55.524  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:55.524   19:07:26 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:55.524   19:07:26 rpc -- common/autotest_common.sh@10 -- # set +x
00:08:55.781  [2024-12-06 19:07:26.549278] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:08:55.781  [2024-12-06 19:07:26.549420] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid510511 ]
00:08:55.781  [2024-12-06 19:07:26.678168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:56.039  [2024-12-06 19:07:26.791469] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:08:56.039  [2024-12-06 19:07:26.791571] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 510511' to capture a snapshot of events at runtime.
00:08:56.039  [2024-12-06 19:07:26.791595] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:56.039  [2024-12-06 19:07:26.791613] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:56.039  [2024-12-06 19:07:26.791630] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid510511 for offline analysis/debug.
00:08:56.039  [2024-12-06 19:07:26.792927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
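Between "Waiting for process to start up..." and the `return 0` that follows, `waitforlisten` polls until spdk_tgt is serving on `rpc_addr=/var/tmp/spdk.sock`, giving up after `max_retries=100` attempts. A hedged, runnable sketch of that polling pattern (a temp file stands in for the UNIX socket so the sketch works without a target; the real helper in autotest_common.sh also probes the RPC itself):

```shell
#!/bin/sh
# Poll for the target's listen socket with a bounded retry count,
# as waitforlisten does above. A plain file stands in for the socket.
sock="$(mktemp -u)"            # stand-in for /var/tmp/spdk.sock
( sleep 0.2; : > "$sock" ) &   # pretend spdk_tgt creates it shortly

i=0; status=timeout
while [ "$i" -lt 100 ]; do     # max_retries=100, as in the log
  if [ -e "$sock" ]; then status=listening; break; fi
  i=$((i + 1)); sleep 0.1
done
echo "$status"
rm -f "$sock"
wait
```

Only once this loop succeeds does the test proceed to issue `rpc_cmd` calls against the socket.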
00:08:56.984   19:07:27 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:56.984   19:07:27 rpc -- common/autotest_common.sh@868 -- # return 0
00:08:56.984   19:07:27 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/python:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/python:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc
00:08:56.984   19:07:27 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/python:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/python:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc
00:08:56.984   19:07:27 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:08:56.984   19:07:27 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:08:56.984   19:07:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:56.984   19:07:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:56.984   19:07:27 rpc -- common/autotest_common.sh@10 -- # set +x
00:08:56.984  ************************************
00:08:56.984  START TEST rpc_integrity
00:08:56.984  ************************************
00:08:56.984   19:07:27 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:08:56.984    19:07:27 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:08:56.984    19:07:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:56.984    19:07:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:56.984    19:07:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:56.984   19:07:27 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:08:56.984    19:07:27 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:08:56.984   19:07:27 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:08:56.984    19:07:27 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:08:56.984    19:07:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:56.984    19:07:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:56.984    19:07:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:56.984   19:07:27 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:08:56.984    19:07:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:08:56.984    19:07:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:56.984    19:07:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:56.984    19:07:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:56.984   19:07:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:08:56.984  {
00:08:56.984  "name": "Malloc0",
00:08:56.984  "aliases": [
00:08:56.984  "26c0386b-c132-4810-9c00-7b2eb06246fc"
00:08:56.984  ],
00:08:56.984  "product_name": "Malloc disk",
00:08:56.984  "block_size": 512,
00:08:56.984  "num_blocks": 16384,
00:08:56.984  "uuid": "26c0386b-c132-4810-9c00-7b2eb06246fc",
00:08:56.984  "assigned_rate_limits": {
00:08:56.984  "rw_ios_per_sec": 0,
00:08:56.984  "rw_mbytes_per_sec": 0,
00:08:56.984  "r_mbytes_per_sec": 0,
00:08:56.984  "w_mbytes_per_sec": 0
00:08:56.984  },
00:08:56.984  "claimed": false,
00:08:56.984  "zoned": false,
00:08:56.984  "supported_io_types": {
00:08:56.984  "read": true,
00:08:56.984  "write": true,
00:08:56.984  "unmap": true,
00:08:56.984  "flush": true,
00:08:56.984  "reset": true,
00:08:56.984  "nvme_admin": false,
00:08:56.984  "nvme_io": false,
00:08:56.984  "nvme_io_md": false,
00:08:56.984  "write_zeroes": true,
00:08:56.984  "zcopy": true,
00:08:56.984  "get_zone_info": false,
00:08:56.984  "zone_management": false,
00:08:56.984  "zone_append": false,
00:08:56.984  "compare": false,
00:08:56.984  "compare_and_write": false,
00:08:56.984  "abort": true,
00:08:56.984  "seek_hole": false,
00:08:56.984  "seek_data": false,
00:08:56.984  "copy": true,
00:08:56.984  "nvme_iov_md": false
00:08:56.984  },
00:08:56.984  "memory_domains": [
00:08:56.984  {
00:08:56.984  "dma_device_id": "system",
00:08:56.984  "dma_device_type": 1
00:08:56.984  },
00:08:56.984  {
00:08:56.984  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:56.984  "dma_device_type": 2
00:08:56.984  }
00:08:56.984  ],
00:08:56.984  "driver_specific": {}
00:08:56.984  }
00:08:56.984  ]'
00:08:56.984    19:07:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:08:56.984   19:07:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:08:56.984   19:07:27 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:08:56.984   19:07:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:56.984   19:07:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:56.984  [2024-12-06 19:07:27.749240] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:08:56.984  [2024-12-06 19:07:27.749318] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:56.984  [2024-12-06 19:07:27.749361] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000022b80
00:08:56.984  [2024-12-06 19:07:27.749386] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:56.984  [2024-12-06 19:07:27.751867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:56.984  [2024-12-06 19:07:27.751898] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:08:56.984  Passthru0
00:08:56.984   19:07:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:56.984    19:07:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:08:56.984    19:07:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:56.984    19:07:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:56.984    19:07:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:56.984   19:07:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:08:56.984  {
00:08:56.984  "name": "Malloc0",
00:08:56.984  "aliases": [
00:08:56.984  "26c0386b-c132-4810-9c00-7b2eb06246fc"
00:08:56.984  ],
00:08:56.984  "product_name": "Malloc disk",
00:08:56.984  "block_size": 512,
00:08:56.984  "num_blocks": 16384,
00:08:56.984  "uuid": "26c0386b-c132-4810-9c00-7b2eb06246fc",
00:08:56.984  "assigned_rate_limits": {
00:08:56.984  "rw_ios_per_sec": 0,
00:08:56.984  "rw_mbytes_per_sec": 0,
00:08:56.984  "r_mbytes_per_sec": 0,
00:08:56.984  "w_mbytes_per_sec": 0
00:08:56.984  },
00:08:56.984  "claimed": true,
00:08:56.984  "claim_type": "exclusive_write",
00:08:56.984  "zoned": false,
00:08:56.984  "supported_io_types": {
00:08:56.984  "read": true,
00:08:56.984  "write": true,
00:08:56.984  "unmap": true,
00:08:56.984  "flush": true,
00:08:56.984  "reset": true,
00:08:56.984  "nvme_admin": false,
00:08:56.984  "nvme_io": false,
00:08:56.984  "nvme_io_md": false,
00:08:56.985  "write_zeroes": true,
00:08:56.985  "zcopy": true,
00:08:56.985  "get_zone_info": false,
00:08:56.985  "zone_management": false,
00:08:56.985  "zone_append": false,
00:08:56.985  "compare": false,
00:08:56.985  "compare_and_write": false,
00:08:56.985  "abort": true,
00:08:56.985  "seek_hole": false,
00:08:56.985  "seek_data": false,
00:08:56.985  "copy": true,
00:08:56.985  "nvme_iov_md": false
00:08:56.985  },
00:08:56.985  "memory_domains": [
00:08:56.985  {
00:08:56.985  "dma_device_id": "system",
00:08:56.985  "dma_device_type": 1
00:08:56.985  },
00:08:56.985  {
00:08:56.985  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:56.985  "dma_device_type": 2
00:08:56.985  }
00:08:56.985  ],
00:08:56.985  "driver_specific": {}
00:08:56.985  },
00:08:56.985  {
00:08:56.985  "name": "Passthru0",
00:08:56.985  "aliases": [
00:08:56.985  "05ac1a37-bcfd-5077-a050-b371e776340d"
00:08:56.985  ],
00:08:56.985  "product_name": "passthru",
00:08:56.985  "block_size": 512,
00:08:56.985  "num_blocks": 16384,
00:08:56.985  "uuid": "05ac1a37-bcfd-5077-a050-b371e776340d",
00:08:56.985  "assigned_rate_limits": {
00:08:56.985  "rw_ios_per_sec": 0,
00:08:56.985  "rw_mbytes_per_sec": 0,
00:08:56.985  "r_mbytes_per_sec": 0,
00:08:56.985  "w_mbytes_per_sec": 0
00:08:56.985  },
00:08:56.985  "claimed": false,
00:08:56.985  "zoned": false,
00:08:56.985  "supported_io_types": {
00:08:56.985  "read": true,
00:08:56.985  "write": true,
00:08:56.985  "unmap": true,
00:08:56.985  "flush": true,
00:08:56.985  "reset": true,
00:08:56.985  "nvme_admin": false,
00:08:56.985  "nvme_io": false,
00:08:56.985  "nvme_io_md": false,
00:08:56.985  "write_zeroes": true,
00:08:56.985  "zcopy": true,
00:08:56.985  "get_zone_info": false,
00:08:56.985  "zone_management": false,
00:08:56.985  "zone_append": false,
00:08:56.985  "compare": false,
00:08:56.985  "compare_and_write": false,
00:08:56.985  "abort": true,
00:08:56.985  "seek_hole": false,
00:08:56.985  "seek_data": false,
00:08:56.985  "copy": true,
00:08:56.985  "nvme_iov_md": false
00:08:56.985  },
00:08:56.985  "memory_domains": [
00:08:56.985  {
00:08:56.985  "dma_device_id": "system",
00:08:56.985  "dma_device_type": 1
00:08:56.985  },
00:08:56.985  {
00:08:56.985  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:56.985  "dma_device_type": 2
00:08:56.985  }
00:08:56.985  ],
00:08:56.985  "driver_specific": {
00:08:56.985  "passthru": {
00:08:56.985  "name": "Passthru0",
00:08:56.985  "base_bdev_name": "Malloc0"
00:08:56.985  }
00:08:56.985  }
00:08:56.985  }
00:08:56.985  ]'
00:08:56.985    19:07:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:08:56.985   19:07:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:08:56.985   19:07:27 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:08:56.985   19:07:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:56.985   19:07:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:56.985   19:07:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:56.985   19:07:27 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:08:56.985   19:07:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:56.985   19:07:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:56.985   19:07:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:56.985    19:07:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:08:56.985    19:07:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:56.985    19:07:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:56.985    19:07:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:56.985   19:07:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:08:56.985    19:07:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:08:56.985   19:07:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:08:56.985  
00:08:56.985  real	0m0.238s
00:08:56.985  user	0m0.139s
00:08:56.985  sys	0m0.020s
00:08:56.985   19:07:27 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:56.985   19:07:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:56.985  ************************************
00:08:56.985  END TEST rpc_integrity
00:08:56.985  ************************************
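rpc_integrity above creates Malloc0 (`bdev_malloc_create 8 512`), layers Passthru0 on it (`bdev_passthru_create -b Malloc0 -p Passthru0`), and then asserts that `bdev_get_bdevs | jq length` reports 2 and that Malloc0 is now claimed. A standalone re-creation of that length check against a trimmed copy of the JSON printed above (grep stands in for jq; a live run would pipe `rpc_cmd bdev_get_bdevs` instead of this literal):

```shell
#!/bin/sh
# Re-create the rpc.sh@21 check: after bdev_passthru_create, the bdev
# list must contain two entries and Malloc0 must be claimed.
bdevs='[
  {"name": "Malloc0", "claimed": true, "claim_type": "exclusive_write"},
  {"name": "Passthru0", "claimed": false}
]'
n=$(printf '%s\n' "$bdevs" | grep -c '"name"')
echo "bdev count: $n"
[ "$n" -eq 2 ] && printf '%s' "$bdevs" | grep -q '"claimed": true' && echo PASSED
```

The teardown half of the test then runs the inverse checks: after `bdev_passthru_delete` and `bdev_malloc_delete`, the list length must drop back to 0, as seen at rpc.sh@26 above.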
00:08:56.985   19:07:27 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:08:56.985   19:07:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:56.985   19:07:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:56.985   19:07:27 rpc -- common/autotest_common.sh@10 -- # set +x
00:08:56.985  ************************************
00:08:56.985  START TEST rpc_plugins
00:08:56.985  ************************************
00:08:56.985   19:07:27 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins
00:08:56.985    19:07:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:08:56.985    19:07:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:56.985    19:07:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:08:57.242    19:07:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.242   19:07:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:08:57.242    19:07:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:08:57.242    19:07:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.242    19:07:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:08:57.242    19:07:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.242   19:07:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:08:57.242  {
00:08:57.242  "name": "Malloc1",
00:08:57.242  "aliases": [
00:08:57.242  "328e9c46-f1d5-47e2-9a02-405d21b13171"
00:08:57.242  ],
00:08:57.242  "product_name": "Malloc disk",
00:08:57.242  "block_size": 4096,
00:08:57.243  "num_blocks": 256,
00:08:57.243  "uuid": "328e9c46-f1d5-47e2-9a02-405d21b13171",
00:08:57.243  "assigned_rate_limits": {
00:08:57.243  "rw_ios_per_sec": 0,
00:08:57.243  "rw_mbytes_per_sec": 0,
00:08:57.243  "r_mbytes_per_sec": 0,
00:08:57.243  "w_mbytes_per_sec": 0
00:08:57.243  },
00:08:57.243  "claimed": false,
00:08:57.243  "zoned": false,
00:08:57.243  "supported_io_types": {
00:08:57.243  "read": true,
00:08:57.243  "write": true,
00:08:57.243  "unmap": true,
00:08:57.243  "flush": true,
00:08:57.243  "reset": true,
00:08:57.243  "nvme_admin": false,
00:08:57.243  "nvme_io": false,
00:08:57.243  "nvme_io_md": false,
00:08:57.243  "write_zeroes": true,
00:08:57.243  "zcopy": true,
00:08:57.243  "get_zone_info": false,
00:08:57.243  "zone_management": false,
00:08:57.243  "zone_append": false,
00:08:57.243  "compare": false,
00:08:57.243  "compare_and_write": false,
00:08:57.243  "abort": true,
00:08:57.243  "seek_hole": false,
00:08:57.243  "seek_data": false,
00:08:57.243  "copy": true,
00:08:57.243  "nvme_iov_md": false
00:08:57.243  },
00:08:57.243  "memory_domains": [
00:08:57.243  {
00:08:57.243  "dma_device_id": "system",
00:08:57.243  "dma_device_type": 1
00:08:57.243  },
00:08:57.243  {
00:08:57.243  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:57.243  "dma_device_type": 2
00:08:57.243  }
00:08:57.243  ],
00:08:57.243  "driver_specific": {}
00:08:57.243  }
00:08:57.243  ]'
00:08:57.243    19:07:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:08:57.243   19:07:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:08:57.243   19:07:27 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:08:57.243   19:07:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.243   19:07:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:08:57.243   19:07:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.243    19:07:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:08:57.243    19:07:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.243    19:07:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:08:57.243    19:07:28 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.243   19:07:28 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:08:57.243    19:07:28 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:08:57.243   19:07:28 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:08:57.243  
00:08:57.243  real	0m0.113s
00:08:57.243  user	0m0.070s
00:08:57.243  sys	0m0.009s
00:08:57.243   19:07:28 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:57.243   19:07:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:08:57.243  ************************************
00:08:57.243  END TEST rpc_plugins
00:08:57.243  ************************************
00:08:57.243   19:07:28 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:08:57.243   19:07:28 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:57.243   19:07:28 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:57.243   19:07:28 rpc -- common/autotest_common.sh@10 -- # set +x
00:08:57.243  ************************************
00:08:57.243  START TEST rpc_trace_cmd_test
00:08:57.243  ************************************
00:08:57.243   19:07:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test
00:08:57.243   19:07:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:08:57.243    19:07:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:08:57.243    19:07:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.243    19:07:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:08:57.243    19:07:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.243   19:07:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:08:57.243  "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid510511",
00:08:57.243  "tpoint_group_mask": "0x8",
00:08:57.243  "iscsi_conn": {
00:08:57.243  "mask": "0x2",
00:08:57.243  "tpoint_mask": "0x0"
00:08:57.243  },
00:08:57.243  "scsi": {
00:08:57.243  "mask": "0x4",
00:08:57.243  "tpoint_mask": "0x0"
00:08:57.243  },
00:08:57.243  "bdev": {
00:08:57.243  "mask": "0x8",
00:08:57.243  "tpoint_mask": "0xffffffffffffffff"
00:08:57.243  },
00:08:57.243  "nvmf_rdma": {
00:08:57.243  "mask": "0x10",
00:08:57.243  "tpoint_mask": "0x0"
00:08:57.243  },
00:08:57.243  "nvmf_tcp": {
00:08:57.243  "mask": "0x20",
00:08:57.243  "tpoint_mask": "0x0"
00:08:57.243  },
00:08:57.243  "ftl": {
00:08:57.243  "mask": "0x40",
00:08:57.243  "tpoint_mask": "0x0"
00:08:57.243  },
00:08:57.243  "blobfs": {
00:08:57.243  "mask": "0x80",
00:08:57.243  "tpoint_mask": "0x0"
00:08:57.243  },
00:08:57.243  "dsa": {
00:08:57.243  "mask": "0x200",
00:08:57.243  "tpoint_mask": "0x0"
00:08:57.243  },
00:08:57.243  "thread": {
00:08:57.243  "mask": "0x400",
00:08:57.243  "tpoint_mask": "0x0"
00:08:57.243  },
00:08:57.243  "nvme_pcie": {
00:08:57.243  "mask": "0x800",
00:08:57.243  "tpoint_mask": "0x0"
00:08:57.243  },
00:08:57.243  "iaa": {
00:08:57.243  "mask": "0x1000",
00:08:57.243  "tpoint_mask": "0x0"
00:08:57.243  },
00:08:57.243  "nvme_tcp": {
00:08:57.243  "mask": "0x2000",
00:08:57.243  "tpoint_mask": "0x0"
00:08:57.243  },
00:08:57.243  "bdev_nvme": {
00:08:57.243  "mask": "0x4000",
00:08:57.243  "tpoint_mask": "0x0"
00:08:57.243  },
00:08:57.243  "sock": {
00:08:57.243  "mask": "0x8000",
00:08:57.243  "tpoint_mask": "0x0"
00:08:57.243  },
00:08:57.243  "blob": {
00:08:57.243  "mask": "0x10000",
00:08:57.243  "tpoint_mask": "0x0"
00:08:57.243  },
00:08:57.243  "bdev_raid": {
00:08:57.243  "mask": "0x20000",
00:08:57.243  "tpoint_mask": "0x0"
00:08:57.243  },
00:08:57.243  "scheduler": {
00:08:57.243  "mask": "0x40000",
00:08:57.243  "tpoint_mask": "0x0"
00:08:57.243  }
00:08:57.243  }'
00:08:57.243    19:07:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:08:57.243   19:07:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']'
00:08:57.243    19:07:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:08:57.243   19:07:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:08:57.243    19:07:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:08:57.501   19:07:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:08:57.501    19:07:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:08:57.501   19:07:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:08:57.501    19:07:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:08:57.501   19:07:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:08:57.501  
00:08:57.501  real	0m0.185s
00:08:57.501  user	0m0.158s
00:08:57.501  sys	0m0.017s
00:08:57.501   19:07:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:57.501   19:07:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:08:57.501  ************************************
00:08:57.501  END TEST rpc_trace_cmd_test
00:08:57.501  ************************************
00:08:57.502   19:07:28 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:08:57.502   19:07:28 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:08:57.502   19:07:28 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:08:57.502   19:07:28 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:57.502   19:07:28 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:57.502   19:07:28 rpc -- common/autotest_common.sh@10 -- # set +x
00:08:57.502  ************************************
00:08:57.502  START TEST rpc_daemon_integrity
00:08:57.502  ************************************
00:08:57.502   19:07:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:08:57.502    19:07:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:08:57.502    19:07:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.502    19:07:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:57.502    19:07:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.502   19:07:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:08:57.502    19:07:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length
00:08:57.502   19:07:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:08:57.502    19:07:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:08:57.502    19:07:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.502    19:07:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:57.502    19:07:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.502   19:07:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2
00:08:57.502    19:07:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:08:57.502    19:07:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.502    19:07:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:57.502    19:07:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.502   19:07:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:08:57.502  {
00:08:57.502  "name": "Malloc2",
00:08:57.502  "aliases": [
00:08:57.502  "add5b501-85ed-4ae4-99d8-d15c409d3904"
00:08:57.502  ],
00:08:57.502  "product_name": "Malloc disk",
00:08:57.502  "block_size": 512,
00:08:57.502  "num_blocks": 16384,
00:08:57.502  "uuid": "add5b501-85ed-4ae4-99d8-d15c409d3904",
00:08:57.502  "assigned_rate_limits": {
00:08:57.502  "rw_ios_per_sec": 0,
00:08:57.502  "rw_mbytes_per_sec": 0,
00:08:57.502  "r_mbytes_per_sec": 0,
00:08:57.502  "w_mbytes_per_sec": 0
00:08:57.502  },
00:08:57.502  "claimed": false,
00:08:57.502  "zoned": false,
00:08:57.502  "supported_io_types": {
00:08:57.502  "read": true,
00:08:57.502  "write": true,
00:08:57.502  "unmap": true,
00:08:57.502  "flush": true,
00:08:57.502  "reset": true,
00:08:57.502  "nvme_admin": false,
00:08:57.502  "nvme_io": false,
00:08:57.502  "nvme_io_md": false,
00:08:57.502  "write_zeroes": true,
00:08:57.502  "zcopy": true,
00:08:57.502  "get_zone_info": false,
00:08:57.502  "zone_management": false,
00:08:57.502  "zone_append": false,
00:08:57.502  "compare": false,
00:08:57.502  "compare_and_write": false,
00:08:57.502  "abort": true,
00:08:57.502  "seek_hole": false,
00:08:57.502  "seek_data": false,
00:08:57.502  "copy": true,
00:08:57.502  "nvme_iov_md": false
00:08:57.502  },
00:08:57.502  "memory_domains": [
00:08:57.502  {
00:08:57.502  "dma_device_id": "system",
00:08:57.502  "dma_device_type": 1
00:08:57.502  },
00:08:57.502  {
00:08:57.502  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:57.502  "dma_device_type": 2
00:08:57.502  }
00:08:57.502  ],
00:08:57.502  "driver_specific": {}
00:08:57.502  }
00:08:57.502  ]'
00:08:57.502    19:07:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length
00:08:57.502   19:07:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:08:57.502   19:07:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:08:57.502   19:07:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.502   19:07:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:57.502  [2024-12-06 19:07:28.434681] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:08:57.502  [2024-12-06 19:07:28.434744] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:57.502  [2024-12-06 19:07:28.434784] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000023d80
00:08:57.502  [2024-12-06 19:07:28.434804] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:57.502  [2024-12-06 19:07:28.437362] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:57.502  [2024-12-06 19:07:28.437396] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:08:57.502  Passthru0
00:08:57.502   19:07:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.502    19:07:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:08:57.502    19:07:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.502    19:07:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:57.759    19:07:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.759   19:07:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:08:57.759  {
00:08:57.760  "name": "Malloc2",
00:08:57.760  "aliases": [
00:08:57.760  "add5b501-85ed-4ae4-99d8-d15c409d3904"
00:08:57.760  ],
00:08:57.760  "product_name": "Malloc disk",
00:08:57.760  "block_size": 512,
00:08:57.760  "num_blocks": 16384,
00:08:57.760  "uuid": "add5b501-85ed-4ae4-99d8-d15c409d3904",
00:08:57.760  "assigned_rate_limits": {
00:08:57.760  "rw_ios_per_sec": 0,
00:08:57.760  "rw_mbytes_per_sec": 0,
00:08:57.760  "r_mbytes_per_sec": 0,
00:08:57.760  "w_mbytes_per_sec": 0
00:08:57.760  },
00:08:57.760  "claimed": true,
00:08:57.760  "claim_type": "exclusive_write",
00:08:57.760  "zoned": false,
00:08:57.760  "supported_io_types": {
00:08:57.760  "read": true,
00:08:57.760  "write": true,
00:08:57.760  "unmap": true,
00:08:57.760  "flush": true,
00:08:57.760  "reset": true,
00:08:57.760  "nvme_admin": false,
00:08:57.760  "nvme_io": false,
00:08:57.760  "nvme_io_md": false,
00:08:57.760  "write_zeroes": true,
00:08:57.760  "zcopy": true,
00:08:57.760  "get_zone_info": false,
00:08:57.760  "zone_management": false,
00:08:57.760  "zone_append": false,
00:08:57.760  "compare": false,
00:08:57.760  "compare_and_write": false,
00:08:57.760  "abort": true,
00:08:57.760  "seek_hole": false,
00:08:57.760  "seek_data": false,
00:08:57.760  "copy": true,
00:08:57.760  "nvme_iov_md": false
00:08:57.760  },
00:08:57.760  "memory_domains": [
00:08:57.760  {
00:08:57.760  "dma_device_id": "system",
00:08:57.760  "dma_device_type": 1
00:08:57.760  },
00:08:57.760  {
00:08:57.760  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:57.760  "dma_device_type": 2
00:08:57.760  }
00:08:57.760  ],
00:08:57.760  "driver_specific": {}
00:08:57.760  },
00:08:57.760  {
00:08:57.760  "name": "Passthru0",
00:08:57.760  "aliases": [
00:08:57.760  "0f6d402b-68a8-570c-be54-4ac8825c6378"
00:08:57.760  ],
00:08:57.760  "product_name": "passthru",
00:08:57.760  "block_size": 512,
00:08:57.760  "num_blocks": 16384,
00:08:57.760  "uuid": "0f6d402b-68a8-570c-be54-4ac8825c6378",
00:08:57.760  "assigned_rate_limits": {
00:08:57.760  "rw_ios_per_sec": 0,
00:08:57.760  "rw_mbytes_per_sec": 0,
00:08:57.760  "r_mbytes_per_sec": 0,
00:08:57.760  "w_mbytes_per_sec": 0
00:08:57.760  },
00:08:57.760  "claimed": false,
00:08:57.760  "zoned": false,
00:08:57.760  "supported_io_types": {
00:08:57.760  "read": true,
00:08:57.760  "write": true,
00:08:57.760  "unmap": true,
00:08:57.760  "flush": true,
00:08:57.760  "reset": true,
00:08:57.760  "nvme_admin": false,
00:08:57.760  "nvme_io": false,
00:08:57.760  "nvme_io_md": false,
00:08:57.760  "write_zeroes": true,
00:08:57.760  "zcopy": true,
00:08:57.760  "get_zone_info": false,
00:08:57.760  "zone_management": false,
00:08:57.760  "zone_append": false,
00:08:57.760  "compare": false,
00:08:57.760  "compare_and_write": false,
00:08:57.760  "abort": true,
00:08:57.760  "seek_hole": false,
00:08:57.760  "seek_data": false,
00:08:57.760  "copy": true,
00:08:57.760  "nvme_iov_md": false
00:08:57.760  },
00:08:57.760  "memory_domains": [
00:08:57.760  {
00:08:57.760  "dma_device_id": "system",
00:08:57.760  "dma_device_type": 1
00:08:57.760  },
00:08:57.760  {
00:08:57.760  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:57.760  "dma_device_type": 2
00:08:57.760  }
00:08:57.760  ],
00:08:57.760  "driver_specific": {
00:08:57.760  "passthru": {
00:08:57.760  "name": "Passthru0",
00:08:57.760  "base_bdev_name": "Malloc2"
00:08:57.760  }
00:08:57.760  }
00:08:57.760  }
00:08:57.760  ]'
00:08:57.760    19:07:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length
00:08:57.760   19:07:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:08:57.760   19:07:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:08:57.760   19:07:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.760   19:07:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:57.760   19:07:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.760   19:07:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2
00:08:57.760   19:07:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.760   19:07:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:57.760   19:07:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.760    19:07:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:08:57.760    19:07:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.760    19:07:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:57.760    19:07:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.760   19:07:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:08:57.760    19:07:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length
00:08:57.760   19:07:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:08:57.760  
00:08:57.760  real	0m0.249s
00:08:57.760  user	0m0.138s
00:08:57.760  sys	0m0.025s
00:08:57.760   19:07:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:57.760   19:07:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:57.760  ************************************
00:08:57.760  END TEST rpc_daemon_integrity
00:08:57.760  ************************************
00:08:57.760   19:07:28 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:08:57.760   19:07:28 rpc -- rpc/rpc.sh@84 -- # killprocess 510511
00:08:57.760   19:07:28 rpc -- common/autotest_common.sh@954 -- # '[' -z 510511 ']'
00:08:57.760   19:07:28 rpc -- common/autotest_common.sh@958 -- # kill -0 510511
00:08:57.760    19:07:28 rpc -- common/autotest_common.sh@959 -- # uname
00:08:57.760   19:07:28 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:57.760    19:07:28 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 510511
00:08:57.760   19:07:28 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:57.760   19:07:28 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:57.760   19:07:28 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 510511'
00:08:57.760  killing process with pid 510511
00:08:57.760   19:07:28 rpc -- common/autotest_common.sh@973 -- # kill 510511
00:08:57.760   19:07:28 rpc -- common/autotest_common.sh@978 -- # wait 510511
00:09:00.288  
00:09:00.288  real	0m4.378s
00:09:00.288  user	0m4.888s
00:09:00.288  sys	0m0.799s
00:09:00.288   19:07:30 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:00.288   19:07:30 rpc -- common/autotest_common.sh@10 -- # set +x
00:09:00.288  ************************************
00:09:00.288  END TEST rpc
00:09:00.288  ************************************
00:09:00.288   19:07:30  -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/skip_rpc.sh
00:09:00.288   19:07:30  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:00.288   19:07:30  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:00.288   19:07:30  -- common/autotest_common.sh@10 -- # set +x
00:09:00.288  ************************************
00:09:00.288  START TEST skip_rpc
00:09:00.288  ************************************
00:09:00.288   19:07:30 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/skip_rpc.sh
00:09:00.288  * Looking for test storage...
00:09:00.288  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc
00:09:00.288    19:07:30 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:00.288     19:07:30 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:09:00.288     19:07:30 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:00.288    19:07:30 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:00.288    19:07:30 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:00.288    19:07:30 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:00.288    19:07:30 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:00.288    19:07:30 skip_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:09:00.288    19:07:30 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:09:00.288    19:07:30 skip_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:09:00.288    19:07:30 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:09:00.288    19:07:30 skip_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:09:00.288    19:07:30 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:09:00.288    19:07:30 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:09:00.288    19:07:30 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:00.288    19:07:30 skip_rpc -- scripts/common.sh@344 -- # case "$op" in
00:09:00.288    19:07:30 skip_rpc -- scripts/common.sh@345 -- # : 1
00:09:00.288    19:07:30 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:00.288    19:07:30 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:00.288     19:07:30 skip_rpc -- scripts/common.sh@365 -- # decimal 1
00:09:00.288     19:07:30 skip_rpc -- scripts/common.sh@353 -- # local d=1
00:09:00.288     19:07:30 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:00.288     19:07:30 skip_rpc -- scripts/common.sh@355 -- # echo 1
00:09:00.288    19:07:30 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:09:00.288     19:07:30 skip_rpc -- scripts/common.sh@366 -- # decimal 2
00:09:00.288     19:07:30 skip_rpc -- scripts/common.sh@353 -- # local d=2
00:09:00.288     19:07:30 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:00.288     19:07:30 skip_rpc -- scripts/common.sh@355 -- # echo 2
00:09:00.288    19:07:30 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:09:00.288    19:07:30 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:00.288    19:07:30 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:00.288    19:07:30 skip_rpc -- scripts/common.sh@368 -- # return 0
00:09:00.288    19:07:30 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:00.288    19:07:30 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:00.288  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:00.288  		--rc genhtml_branch_coverage=1
00:09:00.288  		--rc genhtml_function_coverage=1
00:09:00.288  		--rc genhtml_legend=1
00:09:00.288  		--rc geninfo_all_blocks=1
00:09:00.288  		--rc geninfo_unexecuted_blocks=1
00:09:00.288  		
00:09:00.288  		'
00:09:00.288    19:07:30 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:00.288  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:00.288  		--rc genhtml_branch_coverage=1
00:09:00.288  		--rc genhtml_function_coverage=1
00:09:00.288  		--rc genhtml_legend=1
00:09:00.288  		--rc geninfo_all_blocks=1
00:09:00.288  		--rc geninfo_unexecuted_blocks=1
00:09:00.288  		
00:09:00.288  		'
00:09:00.288    19:07:30 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:00.288  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:00.288  		--rc genhtml_branch_coverage=1
00:09:00.289  		--rc genhtml_function_coverage=1
00:09:00.289  		--rc genhtml_legend=1
00:09:00.289  		--rc geninfo_all_blocks=1
00:09:00.289  		--rc geninfo_unexecuted_blocks=1
00:09:00.289  		
00:09:00.289  		'
00:09:00.289    19:07:30 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:00.289  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:00.289  		--rc genhtml_branch_coverage=1
00:09:00.289  		--rc genhtml_function_coverage=1
00:09:00.289  		--rc genhtml_legend=1
00:09:00.289  		--rc geninfo_all_blocks=1
00:09:00.289  		--rc geninfo_unexecuted_blocks=1
00:09:00.289  		
00:09:00.289  		'
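The scripts/common.sh trace above (`lt 1.15 2` via `cmp_versions`) splits each version string on `.`, `-`, and `:` and compares numerically field by field, with missing fields treated as zero. A hedged re-implementation of that comparison, hard-coded to less-than, whereas the real helper dispatches on an operator through `case "$op"` as the trace shows:

```shell
#!/usr/bin/env bash
# Hedged re-implementation of the field-by-field version comparison
# traced above (scripts/common.sh cmp_versions); simplified to '<'.
lt() {
    local -a ver1 ver2
    local v len
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        # Missing fields compare as 0, so 1.15 vs 2 becomes 1.15 vs 2.0
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then return 1; fi
        if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then return 0; fi
    done
    return 1    # equal versions are not "less than"
}
```

Here `lt 1.15 2` succeeds (1 < 2 in the first field), so the script takes the pre-2.0 lcov path and sets the `--rc lcov_branch_coverage=1 ...` option strings exported above.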
00:09:00.289   19:07:30 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/config.json
00:09:00.289   19:07:30 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/log.txt
00:09:00.289   19:07:30 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:09:00.289   19:07:30 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:00.289   19:07:30 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:00.289   19:07:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:00.289  ************************************
00:09:00.289  START TEST skip_rpc
00:09:00.289  ************************************
00:09:00.289   19:07:30 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc
00:09:00.289   19:07:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=511224
00:09:00.289   19:07:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1
00:09:00.289   19:07:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:09:00.289   19:07:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5
00:09:00.289  [2024-12-06 19:07:31.004274] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:09:00.289  [2024-12-06 19:07:31.004405] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid511224 ]
00:09:00.289  [2024-12-06 19:07:31.133799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:00.547  [2024-12-06 19:07:31.249876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:05.810   19:07:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version
00:09:05.810   19:07:35 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0
00:09:05.810   19:07:35 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version
00:09:05.810   19:07:35 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:09:05.810   19:07:35 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:05.810    19:07:35 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:09:05.810   19:07:35 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:05.810   19:07:35 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version
00:09:05.810   19:07:35 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:05.810   19:07:35 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:05.810   19:07:35 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:09:05.810   19:07:35 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1
00:09:05.810   19:07:35 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:05.810   19:07:35 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:05.810   19:07:35 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
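The sequence above exercises autotest_common.sh's `NOT` wrapper: run a command, record its exit status in `es`, and succeed only if the command failed. A hedged sketch of that pattern, mirroring the `es` bookkeeping visible in the trace (`local es=0`, the `(( es > 128 ))` signal check, `(( !es == 0 ))`); the body is an assumption reconstructed from those lines:

```shell
#!/usr/bin/env bash
# Hedged sketch of the NOT wrapper exercised in the trace above:
# succeed only when the wrapped command fails.
NOT() {
    local es=0
    "$@" || es=$?
    # Exit codes above 128 mean the command died on a signal; the
    # original re-raises those instead of treating them as a clean "no".
    if (( es > 128 )); then
        return "$es"
    fi
    [ "$es" -ne 0 ]    # NOT succeeds iff the wrapped command did not
}
```

In this test the target was started with `--no-rpc-server`, so `rpc_cmd spdk_get_version` fails as intended, `es` becomes 1, and `NOT` returns success, letting skip_rpc proceed to its teardown.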
00:09:05.810   19:07:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT
00:09:05.810   19:07:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 511224
00:09:05.810   19:07:35 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 511224 ']'
00:09:05.810   19:07:35 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 511224
00:09:05.810    19:07:35 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname
00:09:05.810   19:07:35 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:05.810    19:07:35 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 511224
00:09:05.810   19:07:35 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:05.810   19:07:35 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:05.810   19:07:35 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 511224'
00:09:05.810  killing process with pid 511224
00:09:05.810   19:07:35 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 511224
00:09:05.810   19:07:35 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 511224
00:09:07.185  
00:09:07.185  real	0m7.021s
00:09:07.185  user	0m6.569s
00:09:07.185  sys	0m0.454s
00:09:07.185   19:07:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:07.185   19:07:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:07.185  ************************************
00:09:07.185  END TEST skip_rpc
00:09:07.185  ************************************
00:09:07.185   19:07:37 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:09:07.185   19:07:37 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:07.185   19:07:37 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:07.185   19:07:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:07.185  ************************************
00:09:07.185  START TEST skip_rpc_with_json
00:09:07.185  ************************************
00:09:07.185   19:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json
00:09:07.185   19:07:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config
00:09:07.185   19:07:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=512053
00:09:07.185   19:07:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:09:07.185   19:07:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:09:07.185   19:07:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 512053
00:09:07.185   19:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 512053 ']'
00:09:07.185   19:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:07.185   19:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:07.185   19:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:07.185  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:07.185   19:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:07.185   19:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:09:07.185  [2024-12-06 19:07:38.074201] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:09:07.185  [2024-12-06 19:07:38.074335] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid512053 ]
00:09:07.442  [2024-12-06 19:07:38.207308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:07.442  [2024-12-06 19:07:38.326403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:08.376   19:07:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:08.376   19:07:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0
00:09:08.376   19:07:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:09:08.376   19:07:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:08.376   19:07:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:09:08.376  [2024-12-06 19:07:39.161025] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:09:08.376  request:
00:09:08.376  {
00:09:08.376  "trtype": "tcp",
00:09:08.376  "method": "nvmf_get_transports",
00:09:08.376  "req_id": 1
00:09:08.376  }
00:09:08.376  Got JSON-RPC error response
00:09:08.376  response:
00:09:08.376  {
00:09:08.376  "code": -19,
00:09:08.376  "message": "No such device"
00:09:08.376  }
00:09:08.376   19:07:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:09:08.376   19:07:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:09:08.376   19:07:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:08.376   19:07:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:09:08.376  [2024-12-06 19:07:39.169173] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:08.376   19:07:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:08.376   19:07:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:09:08.376   19:07:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:08.376   19:07:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:09:08.634   19:07:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:08.634   19:07:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/config.json
00:09:08.635  {
00:09:08.635  "subsystems": [
00:09:08.635  {
00:09:08.635  "subsystem": "fsdev",
00:09:08.635  "config": [
00:09:08.635  {
00:09:08.635  "method": "fsdev_set_opts",
00:09:08.635  "params": {
00:09:08.635  "fsdev_io_pool_size": 65535,
00:09:08.635  "fsdev_io_cache_size": 256
00:09:08.635  }
00:09:08.635  }
00:09:08.635  ]
00:09:08.635  },
00:09:08.635  {
00:09:08.635  "subsystem": "vfio_user_target",
00:09:08.635  "config": null
00:09:08.635  },
00:09:08.635  {
00:09:08.635  "subsystem": "keyring",
00:09:08.635  "config": []
00:09:08.635  },
00:09:08.635  {
00:09:08.635  "subsystem": "iobuf",
00:09:08.635  "config": [
00:09:08.635  {
00:09:08.635  "method": "iobuf_set_options",
00:09:08.635  "params": {
00:09:08.635  "small_pool_count": 8192,
00:09:08.635  "large_pool_count": 1024,
00:09:08.635  "small_bufsize": 8192,
00:09:08.635  "large_bufsize": 135168,
00:09:08.635  "enable_numa": false
00:09:08.635  }
00:09:08.635  }
00:09:08.635  ]
00:09:08.635  },
00:09:08.635  {
00:09:08.635  "subsystem": "sock",
00:09:08.635  "config": [
00:09:08.635  {
00:09:08.635  "method": "sock_set_default_impl",
00:09:08.635  "params": {
00:09:08.635  "impl_name": "posix"
00:09:08.635  }
00:09:08.635  },
00:09:08.635  {
00:09:08.635  "method": "sock_impl_set_options",
00:09:08.635  "params": {
00:09:08.635  "impl_name": "ssl",
00:09:08.635  "recv_buf_size": 4096,
00:09:08.635  "send_buf_size": 4096,
00:09:08.635  "enable_recv_pipe": true,
00:09:08.635  "enable_quickack": false,
00:09:08.635  "enable_placement_id": 0,
00:09:08.635  "enable_zerocopy_send_server": true,
00:09:08.635  "enable_zerocopy_send_client": false,
00:09:08.635  "zerocopy_threshold": 0,
00:09:08.635  "tls_version": 0,
00:09:08.635  "enable_ktls": false
00:09:08.635  }
00:09:08.635  },
00:09:08.635  {
00:09:08.635  "method": "sock_impl_set_options",
00:09:08.635  "params": {
00:09:08.635  "impl_name": "posix",
00:09:08.635  "recv_buf_size": 2097152,
00:09:08.635  "send_buf_size": 2097152,
00:09:08.635  "enable_recv_pipe": true,
00:09:08.635  "enable_quickack": false,
00:09:08.635  "enable_placement_id": 0,
00:09:08.635  "enable_zerocopy_send_server": true,
00:09:08.635  "enable_zerocopy_send_client": false,
00:09:08.635  "zerocopy_threshold": 0,
00:09:08.635  "tls_version": 0,
00:09:08.635  "enable_ktls": false
00:09:08.635  }
00:09:08.635  }
00:09:08.635  ]
00:09:08.635  },
00:09:08.635  {
00:09:08.635  "subsystem": "vmd",
00:09:08.635  "config": []
00:09:08.635  },
00:09:08.635  {
00:09:08.635  "subsystem": "accel",
00:09:08.635  "config": [
00:09:08.635  {
00:09:08.635  "method": "accel_set_options",
00:09:08.635  "params": {
00:09:08.635  "small_cache_size": 128,
00:09:08.635  "large_cache_size": 16,
00:09:08.635  "task_count": 2048,
00:09:08.635  "sequence_count": 2048,
00:09:08.635  "buf_count": 2048
00:09:08.635  }
00:09:08.635  }
00:09:08.635  ]
00:09:08.635  },
00:09:08.635  {
00:09:08.635  "subsystem": "bdev",
00:09:08.635  "config": [
00:09:08.635  {
00:09:08.635  "method": "bdev_set_options",
00:09:08.635  "params": {
00:09:08.635  "bdev_io_pool_size": 65535,
00:09:08.635  "bdev_io_cache_size": 256,
00:09:08.635  "bdev_auto_examine": true,
00:09:08.635  "iobuf_small_cache_size": 128,
00:09:08.635  "iobuf_large_cache_size": 16
00:09:08.635  }
00:09:08.635  },
00:09:08.635  {
00:09:08.635  "method": "bdev_raid_set_options",
00:09:08.635  "params": {
00:09:08.635  "process_window_size_kb": 1024,
00:09:08.635  "process_max_bandwidth_mb_sec": 0
00:09:08.635  }
00:09:08.635  },
00:09:08.635  {
00:09:08.635  "method": "bdev_iscsi_set_options",
00:09:08.635  "params": {
00:09:08.635  "timeout_sec": 30
00:09:08.635  }
00:09:08.635  },
00:09:08.635  {
00:09:08.635  "method": "bdev_nvme_set_options",
00:09:08.635  "params": {
00:09:08.635  "action_on_timeout": "none",
00:09:08.635  "timeout_us": 0,
00:09:08.635  "timeout_admin_us": 0,
00:09:08.635  "keep_alive_timeout_ms": 10000,
00:09:08.635  "arbitration_burst": 0,
00:09:08.635  "low_priority_weight": 0,
00:09:08.635  "medium_priority_weight": 0,
00:09:08.635  "high_priority_weight": 0,
00:09:08.635  "nvme_adminq_poll_period_us": 10000,
00:09:08.635  "nvme_ioq_poll_period_us": 0,
00:09:08.635  "io_queue_requests": 0,
00:09:08.635  "delay_cmd_submit": true,
00:09:08.635  "transport_retry_count": 4,
00:09:08.635  "bdev_retry_count": 3,
00:09:08.635  "transport_ack_timeout": 0,
00:09:08.635  "ctrlr_loss_timeout_sec": 0,
00:09:08.635  "reconnect_delay_sec": 0,
00:09:08.635  "fast_io_fail_timeout_sec": 0,
00:09:08.635  "disable_auto_failback": false,
00:09:08.635  "generate_uuids": false,
00:09:08.635  "transport_tos": 0,
00:09:08.635  "nvme_error_stat": false,
00:09:08.635  "rdma_srq_size": 0,
00:09:08.635  "io_path_stat": false,
00:09:08.635  "allow_accel_sequence": false,
00:09:08.635  "rdma_max_cq_size": 0,
00:09:08.635  "rdma_cm_event_timeout_ms": 0,
00:09:08.635  "dhchap_digests": [
00:09:08.635  "sha256",
00:09:08.635  "sha384",
00:09:08.635  "sha512"
00:09:08.635  ],
00:09:08.635  "dhchap_dhgroups": [
00:09:08.635  "null",
00:09:08.635  "ffdhe2048",
00:09:08.635  "ffdhe3072",
00:09:08.635  "ffdhe4096",
00:09:08.635  "ffdhe6144",
00:09:08.635  "ffdhe8192"
00:09:08.635  ],
00:09:08.635  "rdma_umr_per_io": false
00:09:08.635  }
00:09:08.635  },
00:09:08.635  {
00:09:08.635  "method": "bdev_nvme_set_hotplug",
00:09:08.635  "params": {
00:09:08.635  "period_us": 100000,
00:09:08.635  "enable": false
00:09:08.635  }
00:09:08.635  },
00:09:08.635  {
00:09:08.635  "method": "bdev_wait_for_examine"
00:09:08.635  }
00:09:08.635  ]
00:09:08.635  },
00:09:08.635  {
00:09:08.635  "subsystem": "scsi",
00:09:08.635  "config": null
00:09:08.635  },
00:09:08.635  {
00:09:08.635  "subsystem": "scheduler",
00:09:08.635  "config": [
00:09:08.635  {
00:09:08.635  "method": "framework_set_scheduler",
00:09:08.635  "params": {
00:09:08.635  "name": "static"
00:09:08.635  }
00:09:08.635  }
00:09:08.635  ]
00:09:08.635  },
00:09:08.635  {
00:09:08.635  "subsystem": "vhost_scsi",
00:09:08.635  "config": []
00:09:08.635  },
00:09:08.635  {
00:09:08.635  "subsystem": "vhost_blk",
00:09:08.635  "config": []
00:09:08.635  },
00:09:08.635  {
00:09:08.635  "subsystem": "ublk",
00:09:08.635  "config": []
00:09:08.635  },
00:09:08.635  {
00:09:08.635  "subsystem": "nbd",
00:09:08.635  "config": []
00:09:08.635  },
00:09:08.635  {
00:09:08.635  "subsystem": "nvmf",
00:09:08.635  "config": [
00:09:08.635  {
00:09:08.635  "method": "nvmf_set_config",
00:09:08.635  "params": {
00:09:08.635  "discovery_filter": "match_any",
00:09:08.635  "admin_cmd_passthru": {
00:09:08.635  "identify_ctrlr": false
00:09:08.635  },
00:09:08.635  "dhchap_digests": [
00:09:08.635  "sha256",
00:09:08.635  "sha384",
00:09:08.635  "sha512"
00:09:08.635  ],
00:09:08.635  "dhchap_dhgroups": [
00:09:08.635  "null",
00:09:08.635  "ffdhe2048",
00:09:08.635  "ffdhe3072",
00:09:08.635  "ffdhe4096",
00:09:08.635  "ffdhe6144",
00:09:08.635  "ffdhe8192"
00:09:08.635  ]
00:09:08.635  }
00:09:08.635  },
00:09:08.635  {
00:09:08.635  "method": "nvmf_set_max_subsystems",
00:09:08.635  "params": {
00:09:08.635  "max_subsystems": 1024
00:09:08.635  }
00:09:08.635  },
00:09:08.635  {
00:09:08.635  "method": "nvmf_set_crdt",
00:09:08.635  "params": {
00:09:08.635  "crdt1": 0,
00:09:08.635  "crdt2": 0,
00:09:08.635  "crdt3": 0
00:09:08.635  }
00:09:08.635  },
00:09:08.635  {
00:09:08.635  "method": "nvmf_create_transport",
00:09:08.635  "params": {
00:09:08.635  "trtype": "TCP",
00:09:08.636  "max_queue_depth": 128,
00:09:08.636  "max_io_qpairs_per_ctrlr": 127,
00:09:08.636  "in_capsule_data_size": 4096,
00:09:08.636  "max_io_size": 131072,
00:09:08.636  "io_unit_size": 131072,
00:09:08.636  "max_aq_depth": 128,
00:09:08.636  "num_shared_buffers": 511,
00:09:08.636  "buf_cache_size": 4294967295,
00:09:08.636  "dif_insert_or_strip": false,
00:09:08.636  "zcopy": false,
00:09:08.636  "c2h_success": true,
00:09:08.636  "sock_priority": 0,
00:09:08.636  "abort_timeout_sec": 1,
00:09:08.636  "ack_timeout": 0,
00:09:08.636  "data_wr_pool_size": 0
00:09:08.636  }
00:09:08.636  }
00:09:08.636  ]
00:09:08.636  },
00:09:08.636  {
00:09:08.636  "subsystem": "iscsi",
00:09:08.636  "config": [
00:09:08.636  {
00:09:08.636  "method": "iscsi_set_options",
00:09:08.636  "params": {
00:09:08.636  "node_base": "iqn.2016-06.io.spdk",
00:09:08.636  "max_sessions": 128,
00:09:08.636  "max_connections_per_session": 2,
00:09:08.636  "max_queue_depth": 64,
00:09:08.636  "default_time2wait": 2,
00:09:08.636  "default_time2retain": 20,
00:09:08.636  "first_burst_length": 8192,
00:09:08.636  "immediate_data": true,
00:09:08.636  "allow_duplicated_isid": false,
00:09:08.636  "error_recovery_level": 0,
00:09:08.636  "nop_timeout": 60,
00:09:08.636  "nop_in_interval": 30,
00:09:08.636  "disable_chap": false,
00:09:08.636  "require_chap": false,
00:09:08.636  "mutual_chap": false,
00:09:08.636  "chap_group": 0,
00:09:08.636  "max_large_datain_per_connection": 64,
00:09:08.636  "max_r2t_per_connection": 4,
00:09:08.636  "pdu_pool_size": 36864,
00:09:08.636  "immediate_data_pool_size": 16384,
00:09:08.636  "data_out_pool_size": 2048
00:09:08.636  }
00:09:08.636  }
00:09:08.636  ]
00:09:08.636  }
00:09:08.636  ]
00:09:08.636  }
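The JSON ending above is a `save_config`-style dump that `spdk_tgt` replays at startup via `--json` (as the next trace line does with `test/rpc/config.json`): a top-level `"subsystems"` array whose entries each carry a `"config"` list of `{method, params}` RPC calls. A minimal shell sketch of that shape — the file path and trimmed contents here are illustrative, not the full dump:

```shell
# Write a trimmed, illustrative config in the same shape as the dump above:
# a "subsystems" array, each entry with a "config" list of {method, params}.
cat > /tmp/example_config.json <<'EOF'
{
  "subsystems": [
    {"subsystem": "scheduler",
     "config": [{"method": "framework_set_scheduler", "params": {"name": "static"}}]},
    {"subsystem": "nvmf",
     "config": [{"method": "nvmf_set_max_subsystems", "params": {"max_subsystems": 1024}}]}
  ]
}
EOF

# List the RPC methods such a config would replay on startup.
grep -o '"method": "[^"]*"' /tmp/example_config.json | cut -d'"' -f4
```

The test at `skip_rpc.sh@46` relies on exactly this property: a config saved from one target instance can boot a second instance with no RPC server at all.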
00:09:08.636   19:07:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:09:08.636   19:07:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 512053
00:09:08.636   19:07:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 512053 ']'
00:09:08.636   19:07:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 512053
00:09:08.636    19:07:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:09:08.636   19:07:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:08.636    19:07:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 512053
00:09:08.636   19:07:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:08.636   19:07:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:08.636   19:07:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 512053'
00:09:08.636  killing process with pid 512053
00:09:08.636   19:07:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 512053
00:09:08.636   19:07:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 512053
00:09:10.532   19:07:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=512463
00:09:10.532   19:07:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/config.json
00:09:10.532   19:07:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:09:15.833   19:07:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 512463
00:09:15.833   19:07:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 512463 ']'
00:09:15.833   19:07:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 512463
00:09:15.833    19:07:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:09:15.833   19:07:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:15.833    19:07:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 512463
00:09:15.833   19:07:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:15.833   19:07:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:15.833   19:07:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 512463'
00:09:15.833  killing process with pid 512463
00:09:15.833   19:07:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 512463
00:09:15.833   19:07:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 512463
00:09:17.732   19:07:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/log.txt
00:09:17.732   19:07:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/log.txt
00:09:17.732  
00:09:17.732  real	0m10.479s
00:09:17.732  user	0m10.048s
00:09:17.732  sys	0m1.006s
00:09:17.732   19:07:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:17.732   19:07:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:09:17.732  ************************************
00:09:17.732  END TEST skip_rpc_with_json
00:09:17.732  ************************************
00:09:17.732   19:07:48 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:09:17.732   19:07:48 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:17.732   19:07:48 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:17.732   19:07:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:17.732  ************************************
00:09:17.732  START TEST skip_rpc_with_delay
00:09:17.732  ************************************
00:09:17.732   19:07:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay
00:09:17.732   19:07:48 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:09:17.732   19:07:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0
00:09:17.732   19:07:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:09:17.732   19:07:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:09:17.732   19:07:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:17.732    19:07:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:09:17.732   19:07:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:17.732    19:07:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:09:17.732   19:07:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:17.732   19:07:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:09:17.732   19:07:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:09:17.732   19:07:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:09:17.732  [2024-12-06 19:07:48.607249] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:09:17.732   19:07:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1
00:09:17.732   19:07:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:17.732   19:07:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:17.732   19:07:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:17.732  
00:09:17.732  real	0m0.167s
00:09:17.732  user	0m0.090s
00:09:17.732  sys	0m0.076s
00:09:17.733   19:07:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:17.733   19:07:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:09:17.733  ************************************
00:09:17.733  END TEST skip_rpc_with_delay
00:09:17.733  ************************************
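The `skip_rpc_with_delay` trace above exercises the harness's `NOT` wrapper: run `spdk_tgt` with a flag combination that must fail (`--wait-for-rpc` without an RPC server) and invert its exit status, tracking it in `es`. A minimal sketch of such an inverter — the name `NOT` and the `es` variable match the trace, but this body is illustrative, not the harness's actual implementation:

```shell
# Illustrative inverter modeled on the NOT wrapper in the trace: the wrapped
# command is expected to fail, so success becomes failure and vice versa.
NOT() {
    local es=0
    "$@" || es=$?        # capture the wrapped command's exit status
    if (( es == 0 )); then
        return 1         # command unexpectedly succeeded
    fi
    return 0             # command failed, as expected
}

NOT false && echo 'inversion works: false reported as expected failure'
```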
00:09:17.991    19:07:48 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:09:17.991   19:07:48 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:09:17.991   19:07:48 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:09:17.991   19:07:48 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:17.991   19:07:48 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:17.991   19:07:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:17.991  ************************************
00:09:17.991  START TEST exit_on_failed_rpc_init
00:09:17.991  ************************************
00:09:17.991   19:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init
00:09:17.991   19:07:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=513396
00:09:17.991   19:07:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:09:17.991   19:07:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 513396
00:09:17.991   19:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 513396 ']'
00:09:17.991   19:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:17.991   19:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:17.991   19:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:17.991  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:17.991   19:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:17.991   19:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:09:17.991  [2024-12-06 19:07:48.828471] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:09:17.991  [2024-12-06 19:07:48.828616] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid513396 ]
00:09:18.249  [2024-12-06 19:07:48.962740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:18.249  [2024-12-06 19:07:49.078744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:19.196   19:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:19.196   19:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0
00:09:19.197   19:07:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:09:19.197   19:07:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:09:19.197   19:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0
00:09:19.197   19:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:09:19.197   19:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:09:19.197   19:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:19.197    19:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:09:19.197   19:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:19.197    19:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:09:19.197   19:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:19.197   19:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:09:19.197   19:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:09:19.197   19:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:09:19.197  [2024-12-06 19:07:50.012213] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:09:19.197  [2024-12-06 19:07:50.012343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid513575 ]
00:09:19.457  [2024-12-06 19:07:50.151848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:19.457  [2024-12-06 19:07:50.280817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:19.457  [2024-12-06 19:07:50.280987] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:09:19.457  [2024-12-06 19:07:50.281020] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:09:19.457  [2024-12-06 19:07:50.281039] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:09:19.715   19:07:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234
00:09:19.715   19:07:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:19.715   19:07:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106
00:09:19.715   19:07:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in
00:09:19.715   19:07:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1
00:09:19.715   19:07:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:19.715   19:07:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:09:19.715   19:07:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 513396
00:09:19.715   19:07:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 513396 ']'
00:09:19.715   19:07:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 513396
00:09:19.715    19:07:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname
00:09:19.715   19:07:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:19.715    19:07:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 513396
00:09:19.715   19:07:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:19.715   19:07:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:19.715   19:07:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 513396'
00:09:19.715  killing process with pid 513396
00:09:19.715   19:07:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 513396
00:09:19.715   19:07:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 513396
00:09:22.245  
00:09:22.245  real	0m3.952s
00:09:22.245  user	0m4.377s
00:09:22.245  sys	0m0.785s
00:09:22.245   19:07:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:22.245   19:07:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:09:22.245  ************************************
00:09:22.245  END TEST exit_on_failed_rpc_init
00:09:22.245  ************************************
00:09:22.245   19:07:52 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/config.json
00:09:22.245  
00:09:22.245  real	0m21.966s
00:09:22.245  user	0m21.255s
00:09:22.245  sys	0m2.516s
00:09:22.245   19:07:52 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:22.245   19:07:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:22.245  ************************************
00:09:22.245  END TEST skip_rpc
00:09:22.245  ************************************
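Throughout the trace, `killprocess` first probes the pid with `kill -0`, checks the process name (refusing `sudo`), then kills and waits on it. A minimal sketch of that kill-and-reap pattern — the `ps` name check is omitted and this body is illustrative, not the `autotest_common.sh` implementation:

```shell
# Illustrative kill-and-reap helper modeled on the killprocess trace above:
# verify the pid is alive with `kill -0`, then kill it and wait to reap it.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1   # not running
    kill "$pid"
    wait "$pid" 2>/dev/null                  # reap; status reflects SIGTERM
    echo "killing process with pid $pid"
}

sleep 30 &
killprocess $!
```

The `wait` matters: without it, the target lingers as a zombie and a following `kill -0` (like the harness's `wait` step after `kill`) could still see it as alive.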
00:09:22.245   19:07:52  -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:09:22.245   19:07:52  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:22.245   19:07:52  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:22.245   19:07:52  -- common/autotest_common.sh@10 -- # set +x
00:09:22.245  ************************************
00:09:22.245  START TEST rpc_client
00:09:22.245  ************************************
00:09:22.245   19:07:52 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:09:22.245  * Looking for test storage...
00:09:22.245  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_client
00:09:22.245    19:07:52 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:22.245     19:07:52 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version
00:09:22.245     19:07:52 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:22.245    19:07:52 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:22.245    19:07:52 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:22.245    19:07:52 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:22.245    19:07:52 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:22.245    19:07:52 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:09:22.245    19:07:52 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:09:22.245    19:07:52 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:09:22.245    19:07:52 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:09:22.245    19:07:52 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:09:22.245    19:07:52 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:09:22.245    19:07:52 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:09:22.245    19:07:52 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:22.245    19:07:52 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:09:22.245    19:07:52 rpc_client -- scripts/common.sh@345 -- # : 1
00:09:22.245    19:07:52 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:22.245    19:07:52 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:22.245     19:07:52 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:09:22.245     19:07:52 rpc_client -- scripts/common.sh@353 -- # local d=1
00:09:22.245     19:07:52 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:22.245     19:07:52 rpc_client -- scripts/common.sh@355 -- # echo 1
00:09:22.245    19:07:52 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:09:22.245     19:07:52 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:09:22.245     19:07:52 rpc_client -- scripts/common.sh@353 -- # local d=2
00:09:22.245     19:07:52 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:22.245     19:07:52 rpc_client -- scripts/common.sh@355 -- # echo 2
00:09:22.245    19:07:52 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:09:22.245    19:07:52 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:22.245    19:07:52 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:22.245    19:07:52 rpc_client -- scripts/common.sh@368 -- # return 0
00:09:22.245    19:07:52 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:22.245    19:07:52 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:22.245  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:22.245  		--rc genhtml_branch_coverage=1
00:09:22.245  		--rc genhtml_function_coverage=1
00:09:22.245  		--rc genhtml_legend=1
00:09:22.245  		--rc geninfo_all_blocks=1
00:09:22.245  		--rc geninfo_unexecuted_blocks=1
00:09:22.245  		
00:09:22.245  		'
00:09:22.245    19:07:52 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:22.245  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:22.245  		--rc genhtml_branch_coverage=1
00:09:22.245  		--rc genhtml_function_coverage=1
00:09:22.245  		--rc genhtml_legend=1
00:09:22.245  		--rc geninfo_all_blocks=1
00:09:22.245  		--rc geninfo_unexecuted_blocks=1
00:09:22.245  		
00:09:22.245  		'
00:09:22.245    19:07:52 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:22.245  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:22.245  		--rc genhtml_branch_coverage=1
00:09:22.245  		--rc genhtml_function_coverage=1
00:09:22.245  		--rc genhtml_legend=1
00:09:22.245  		--rc geninfo_all_blocks=1
00:09:22.245  		--rc geninfo_unexecuted_blocks=1
00:09:22.245  		
00:09:22.245  		'
00:09:22.245    19:07:52 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:22.245  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:22.245  		--rc genhtml_branch_coverage=1
00:09:22.245  		--rc genhtml_function_coverage=1
00:09:22.245  		--rc genhtml_legend=1
00:09:22.245  		--rc geninfo_all_blocks=1
00:09:22.245  		--rc geninfo_unexecuted_blocks=1
00:09:22.245  		
00:09:22.245  		'
00:09:22.245   19:07:52 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_client/rpc_client_test
00:09:22.245  OK
00:09:22.245   19:07:52 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:09:22.245  
00:09:22.245  real	0m0.194s
00:09:22.245  user	0m0.116s
00:09:22.245  sys	0m0.088s
00:09:22.245   19:07:52 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:22.245   19:07:52 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:09:22.245  ************************************
00:09:22.245  END TEST rpc_client
00:09:22.245  ************************************
00:09:22.245   19:07:52  -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/json_config.sh
00:09:22.245   19:07:52  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:22.245   19:07:52  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:22.245   19:07:52  -- common/autotest_common.sh@10 -- # set +x
00:09:22.245  ************************************
00:09:22.245  START TEST json_config
00:09:22.245  ************************************
00:09:22.245   19:07:52 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/json_config.sh
00:09:22.245    19:07:53 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:22.245     19:07:53 json_config -- common/autotest_common.sh@1711 -- # lcov --version
00:09:22.245     19:07:53 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:22.245    19:07:53 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:22.245    19:07:53 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:22.245    19:07:53 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:22.245    19:07:53 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:22.245    19:07:53 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:09:22.245    19:07:53 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:09:22.245    19:07:53 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:09:22.245    19:07:53 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:09:22.245    19:07:53 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:09:22.245    19:07:53 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:09:22.245    19:07:53 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:09:22.245    19:07:53 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:22.245    19:07:53 json_config -- scripts/common.sh@344 -- # case "$op" in
00:09:22.246    19:07:53 json_config -- scripts/common.sh@345 -- # : 1
00:09:22.246    19:07:53 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:22.246    19:07:53 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:22.246     19:07:53 json_config -- scripts/common.sh@365 -- # decimal 1
00:09:22.246     19:07:53 json_config -- scripts/common.sh@353 -- # local d=1
00:09:22.246     19:07:53 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:22.246     19:07:53 json_config -- scripts/common.sh@355 -- # echo 1
00:09:22.246    19:07:53 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:09:22.246     19:07:53 json_config -- scripts/common.sh@366 -- # decimal 2
00:09:22.246     19:07:53 json_config -- scripts/common.sh@353 -- # local d=2
00:09:22.246     19:07:53 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:22.246     19:07:53 json_config -- scripts/common.sh@355 -- # echo 2
00:09:22.246    19:07:53 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:09:22.246    19:07:53 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:22.246    19:07:53 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:22.246    19:07:53 json_config -- scripts/common.sh@368 -- # return 0
00:09:22.246    19:07:53 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:22.246    19:07:53 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:22.246  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:22.246  		--rc genhtml_branch_coverage=1
00:09:22.246  		--rc genhtml_function_coverage=1
00:09:22.246  		--rc genhtml_legend=1
00:09:22.246  		--rc geninfo_all_blocks=1
00:09:22.246  		--rc geninfo_unexecuted_blocks=1
00:09:22.246  		
00:09:22.246  		'
00:09:22.246    19:07:53 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:22.246  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:22.246  		--rc genhtml_branch_coverage=1
00:09:22.246  		--rc genhtml_function_coverage=1
00:09:22.246  		--rc genhtml_legend=1
00:09:22.246  		--rc geninfo_all_blocks=1
00:09:22.246  		--rc geninfo_unexecuted_blocks=1
00:09:22.246  		
00:09:22.246  		'
00:09:22.246    19:07:53 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:22.246  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:22.246  		--rc genhtml_branch_coverage=1
00:09:22.246  		--rc genhtml_function_coverage=1
00:09:22.246  		--rc genhtml_legend=1
00:09:22.246  		--rc geninfo_all_blocks=1
00:09:22.246  		--rc geninfo_unexecuted_blocks=1
00:09:22.246  		
00:09:22.246  		'
00:09:22.246    19:07:53 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:22.246  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:22.246  		--rc genhtml_branch_coverage=1
00:09:22.246  		--rc genhtml_function_coverage=1
00:09:22.246  		--rc genhtml_legend=1
00:09:22.246  		--rc geninfo_all_blocks=1
00:09:22.246  		--rc geninfo_unexecuted_blocks=1
00:09:22.246  		
00:09:22.246  		'
00:09:22.246   19:07:53 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/nvmf/common.sh
00:09:22.246     19:07:53 json_config -- nvmf/common.sh@7 -- # uname -s
00:09:22.246    19:07:53 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:22.246    19:07:53 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:22.246    19:07:53 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:22.246    19:07:53 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:22.246    19:07:53 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:22.246    19:07:53 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:22.246    19:07:53 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:22.246    19:07:53 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:22.246    19:07:53 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:22.246     19:07:53 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:22.246    19:07:53 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:09:22.246    19:07:53 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
00:09:22.246    19:07:53 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:22.246    19:07:53 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:22.246    19:07:53 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:09:22.246    19:07:53 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:22.246    19:07:53 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/common.sh
00:09:22.246     19:07:53 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:09:22.246     19:07:53 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:22.246     19:07:53 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:22.246     19:07:53 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:22.246      19:07:53 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:22.246      19:07:53 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:22.246      19:07:53 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:22.246      19:07:53 json_config -- paths/export.sh@5 -- # export PATH
00:09:22.246      19:07:53 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:22.246    19:07:53 json_config -- nvmf/common.sh@51 -- # : 0
00:09:22.246    19:07:53 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:09:22.246    19:07:53 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:09:22.246    19:07:53 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:22.246    19:07:53 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:22.246    19:07:53 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:22.246    19:07:53 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:09:22.246  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:09:22.246    19:07:53 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:09:22.246    19:07:53 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:09:22.246    19:07:53 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0
00:09:22.246   19:07:53 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/common.sh
00:09:22.246   19:07:53 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:09:22.246   19:07:53 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:09:22.246   19:07:53 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:09:22.246   19:07:53 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + 	SPDK_TEST_ISCSI + 	SPDK_TEST_NVMF + 	SPDK_TEST_VHOST + 	SPDK_TEST_VHOST_INIT + 	SPDK_TEST_RBD == 0 ))
00:09:22.246   19:07:53 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests'
00:09:22.246  WARNING: No tests are enabled so not running JSON configuration tests
00:09:22.246   19:07:53 json_config -- json_config/json_config.sh@28 -- # exit 0
00:09:22.246  
00:09:22.246  real	0m0.139s
00:09:22.246  user	0m0.104s
00:09:22.246  sys	0m0.038s
00:09:22.246   19:07:53 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:22.246   19:07:53 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:22.246  ************************************
00:09:22.246  END TEST json_config
00:09:22.246  ************************************
00:09:22.246   19:07:53  -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:09:22.246   19:07:53  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:22.246   19:07:53  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:22.246   19:07:53  -- common/autotest_common.sh@10 -- # set +x
00:09:22.246  ************************************
00:09:22.246  START TEST json_config_extra_key
00:09:22.246  ************************************
00:09:22.246   19:07:53 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:09:22.505    19:07:53 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:22.505     19:07:53 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version
00:09:22.505     19:07:53 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:22.505    19:07:53 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:22.505    19:07:53 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:22.505    19:07:53 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:22.505    19:07:53 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:22.505    19:07:53 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:09:22.505    19:07:53 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:09:22.505    19:07:53 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:09:22.505    19:07:53 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:09:22.505    19:07:53 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:09:22.505    19:07:53 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:09:22.505    19:07:53 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:09:22.505    19:07:53 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:22.505    19:07:53 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in
00:09:22.505    19:07:53 json_config_extra_key -- scripts/common.sh@345 -- # : 1
00:09:22.505    19:07:53 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:22.505    19:07:53 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:22.505     19:07:53 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1
00:09:22.505     19:07:53 json_config_extra_key -- scripts/common.sh@353 -- # local d=1
00:09:22.505     19:07:53 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:22.505     19:07:53 json_config_extra_key -- scripts/common.sh@355 -- # echo 1
00:09:22.505    19:07:53 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1
00:09:22.505     19:07:53 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2
00:09:22.505     19:07:53 json_config_extra_key -- scripts/common.sh@353 -- # local d=2
00:09:22.505     19:07:53 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:22.505     19:07:53 json_config_extra_key -- scripts/common.sh@355 -- # echo 2
00:09:22.505    19:07:53 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2
00:09:22.505    19:07:53 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:22.505    19:07:53 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:22.505    19:07:53 json_config_extra_key -- scripts/common.sh@368 -- # return 0
00:09:22.505    19:07:53 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:22.505    19:07:53 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:22.505  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:22.505  		--rc genhtml_branch_coverage=1
00:09:22.505  		--rc genhtml_function_coverage=1
00:09:22.505  		--rc genhtml_legend=1
00:09:22.505  		--rc geninfo_all_blocks=1
00:09:22.505  		--rc geninfo_unexecuted_blocks=1
00:09:22.505  		
00:09:22.505  		'
00:09:22.505    19:07:53 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:22.505  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:22.505  		--rc genhtml_branch_coverage=1
00:09:22.505  		--rc genhtml_function_coverage=1
00:09:22.505  		--rc genhtml_legend=1
00:09:22.505  		--rc geninfo_all_blocks=1
00:09:22.505  		--rc geninfo_unexecuted_blocks=1
00:09:22.505  		
00:09:22.505  		'
00:09:22.505    19:07:53 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:22.505  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:22.505  		--rc genhtml_branch_coverage=1
00:09:22.505  		--rc genhtml_function_coverage=1
00:09:22.505  		--rc genhtml_legend=1
00:09:22.505  		--rc geninfo_all_blocks=1
00:09:22.505  		--rc geninfo_unexecuted_blocks=1
00:09:22.505  		
00:09:22.505  		'
00:09:22.505    19:07:53 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:22.505  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:22.505  		--rc genhtml_branch_coverage=1
00:09:22.505  		--rc genhtml_function_coverage=1
00:09:22.505  		--rc genhtml_legend=1
00:09:22.505  		--rc geninfo_all_blocks=1
00:09:22.505  		--rc geninfo_unexecuted_blocks=1
00:09:22.505  		
00:09:22.505  		'
00:09:22.505   19:07:53 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/nvmf/common.sh
00:09:22.505     19:07:53 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:09:22.505    19:07:53 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:22.505    19:07:53 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:22.505    19:07:53 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:22.505    19:07:53 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:22.505    19:07:53 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:22.505    19:07:53 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:22.505    19:07:53 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:22.505    19:07:53 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:22.505    19:07:53 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:22.505     19:07:53 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:22.505    19:07:53 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:09:22.505    19:07:53 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
00:09:22.505    19:07:53 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:22.505    19:07:53 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:22.505    19:07:53 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:09:22.506    19:07:53 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:22.506    19:07:53 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/common.sh
00:09:22.506     19:07:53 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob
00:09:22.506     19:07:53 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:22.506     19:07:53 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:22.506     19:07:53 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:22.506      19:07:53 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:22.506      19:07:53 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:22.506      19:07:53 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:22.506      19:07:53 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:09:22.506      19:07:53 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:22.506    19:07:53 json_config_extra_key -- nvmf/common.sh@51 -- # : 0
00:09:22.506    19:07:53 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:09:22.506    19:07:53 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:09:22.506    19:07:53 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:22.506    19:07:53 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:22.506    19:07:53 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:22.506    19:07:53 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:09:22.506  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:09:22.506    19:07:53 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:09:22.506    19:07:53 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:09:22.506    19:07:53 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0
00:09:22.506   19:07:53 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/common.sh
00:09:22.506   19:07:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:09:22.506   19:07:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:09:22.506   19:07:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:09:22.506   19:07:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:09:22.506   19:07:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:09:22.506   19:07:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:09:22.506   19:07:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/extra_key.json')
00:09:22.506   19:07:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:09:22.506   19:07:53 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:09:22.506   19:07:53 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:09:22.506  INFO: launching applications...
00:09:22.506   19:07:53 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/extra_key.json
00:09:22.506   19:07:53 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:09:22.506   19:07:53 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:09:22.506   19:07:53 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:09:22.506   19:07:53 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:09:22.506   19:07:53 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:09:22.506   19:07:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:09:22.506   19:07:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:09:22.506   19:07:53 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=514149
00:09:22.506   19:07:53 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/extra_key.json
00:09:22.506   19:07:53 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:09:22.506  Waiting for target to run...
00:09:22.506   19:07:53 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 514149 /var/tmp/spdk_tgt.sock
00:09:22.506   19:07:53 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 514149 ']'
00:09:22.506   19:07:53 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:09:22.506   19:07:53 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:22.506   19:07:53 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:09:22.506  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:09:22.506   19:07:53 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:22.506   19:07:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:09:22.506  [2024-12-06 19:07:53.432296] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:09:22.506  [2024-12-06 19:07:53.432458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid514149 ]
00:09:23.439  [2024-12-06 19:07:54.074800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:23.439  [2024-12-06 19:07:54.181639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:24.004   19:07:54 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:24.004   19:07:54 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0
00:09:24.004   19:07:54 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:09:24.004  
00:09:24.004   19:07:54 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:09:24.005  INFO: shutting down applications...
00:09:24.005   19:07:54 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:09:24.005   19:07:54 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:09:24.005   19:07:54 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:09:24.005   19:07:54 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 514149 ]]
00:09:24.005   19:07:54 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 514149
00:09:24.005   19:07:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:09:24.005   19:07:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:09:24.005   19:07:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 514149
00:09:24.005   19:07:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:09:24.569   19:07:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:09:24.569   19:07:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:09:24.569   19:07:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 514149
00:09:24.569   19:07:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:09:25.134   19:07:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:09:25.134   19:07:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:09:25.134   19:07:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 514149
00:09:25.134   19:07:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:09:25.698   19:07:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:09:25.698   19:07:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:09:25.698   19:07:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 514149
00:09:25.698   19:07:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:09:25.957   19:07:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:09:25.957   19:07:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:09:25.957   19:07:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 514149
00:09:25.957   19:07:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:09:26.523   19:07:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:09:26.523   19:07:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:09:26.523   19:07:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 514149
00:09:26.523   19:07:57 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:09:26.523   19:07:57 json_config_extra_key -- json_config/common.sh@43 -- # break
00:09:26.523   19:07:57 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:09:26.523   19:07:57 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:09:26.523  SPDK target shutdown done
00:09:26.523   19:07:57 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:09:26.523  Success
00:09:26.523  
00:09:26.523  real	0m4.170s
00:09:26.523  user	0m3.655s
00:09:26.523  sys	0m0.896s
00:09:26.523   19:07:57 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:26.523   19:07:57 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:09:26.523  ************************************
00:09:26.523  END TEST json_config_extra_key
00:09:26.523  ************************************
00:09:26.523   19:07:57  -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:09:26.523   19:07:57  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:26.523   19:07:57  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:26.523   19:07:57  -- common/autotest_common.sh@10 -- # set +x
00:09:26.523  ************************************
00:09:26.523  START TEST alias_rpc
00:09:26.523  ************************************
00:09:26.523   19:07:57 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:09:26.523  * Looking for test storage...
00:09:26.523  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/alias_rpc
00:09:26.523    19:07:57 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:26.523     19:07:57 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:09:26.523     19:07:57 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:26.782    19:07:57 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:26.782    19:07:57 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:26.782    19:07:57 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:26.782    19:07:57 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:26.782    19:07:57 alias_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:09:26.782    19:07:57 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:09:26.782    19:07:57 alias_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:09:26.782    19:07:57 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:09:26.782    19:07:57 alias_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:09:26.782    19:07:57 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:09:26.782    19:07:57 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:09:26.782    19:07:57 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:26.782    19:07:57 alias_rpc -- scripts/common.sh@344 -- # case "$op" in
00:09:26.782    19:07:57 alias_rpc -- scripts/common.sh@345 -- # : 1
00:09:26.782    19:07:57 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:26.782    19:07:57 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:26.782     19:07:57 alias_rpc -- scripts/common.sh@365 -- # decimal 1
00:09:26.782     19:07:57 alias_rpc -- scripts/common.sh@353 -- # local d=1
00:09:26.782     19:07:57 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:26.782     19:07:57 alias_rpc -- scripts/common.sh@355 -- # echo 1
00:09:26.782    19:07:57 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:09:26.782     19:07:57 alias_rpc -- scripts/common.sh@366 -- # decimal 2
00:09:26.782     19:07:57 alias_rpc -- scripts/common.sh@353 -- # local d=2
00:09:26.782     19:07:57 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:26.782     19:07:57 alias_rpc -- scripts/common.sh@355 -- # echo 2
00:09:26.782    19:07:57 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:09:26.782    19:07:57 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:26.782    19:07:57 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:26.782    19:07:57 alias_rpc -- scripts/common.sh@368 -- # return 0
00:09:26.782    19:07:57 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:26.782    19:07:57 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:26.782  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:26.782  		--rc genhtml_branch_coverage=1
00:09:26.782  		--rc genhtml_function_coverage=1
00:09:26.782  		--rc genhtml_legend=1
00:09:26.782  		--rc geninfo_all_blocks=1
00:09:26.782  		--rc geninfo_unexecuted_blocks=1
00:09:26.782  		
00:09:26.782  		'
00:09:26.782    19:07:57 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:26.782  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:26.782  		--rc genhtml_branch_coverage=1
00:09:26.782  		--rc genhtml_function_coverage=1
00:09:26.782  		--rc genhtml_legend=1
00:09:26.782  		--rc geninfo_all_blocks=1
00:09:26.782  		--rc geninfo_unexecuted_blocks=1
00:09:26.782  		
00:09:26.782  		'
00:09:26.782    19:07:57 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:26.782  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:26.782  		--rc genhtml_branch_coverage=1
00:09:26.782  		--rc genhtml_function_coverage=1
00:09:26.782  		--rc genhtml_legend=1
00:09:26.782  		--rc geninfo_all_blocks=1
00:09:26.782  		--rc geninfo_unexecuted_blocks=1
00:09:26.782  		
00:09:26.782  		'
00:09:26.782    19:07:57 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:26.782  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:26.782  		--rc genhtml_branch_coverage=1
00:09:26.782  		--rc genhtml_function_coverage=1
00:09:26.782  		--rc genhtml_legend=1
00:09:26.782  		--rc geninfo_all_blocks=1
00:09:26.782  		--rc geninfo_unexecuted_blocks=1
00:09:26.782  		
00:09:26.782  		'
00:09:26.782   19:07:57 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:09:26.782   19:07:57 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=514618
00:09:26.782   19:07:57 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:09:26.782   19:07:57 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 514618
00:09:26.782   19:07:57 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 514618 ']'
00:09:26.782   19:07:57 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:26.782   19:07:57 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:26.782   19:07:57 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:26.782  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:26.782   19:07:57 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:26.782   19:07:57 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:26.782  [2024-12-06 19:07:57.663308] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:09:26.782  [2024-12-06 19:07:57.663453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid514618 ]
00:09:27.040  [2024-12-06 19:07:57.797281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:27.040  [2024-12-06 19:07:57.912332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:27.975   19:07:58 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:27.975   19:07:58 alias_rpc -- common/autotest_common.sh@868 -- # return 0
00:09:27.975   19:07:58 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py load_config -i
00:09:28.233   19:07:59 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 514618
00:09:28.233   19:07:59 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 514618 ']'
00:09:28.233   19:07:59 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 514618
00:09:28.233    19:07:59 alias_rpc -- common/autotest_common.sh@959 -- # uname
00:09:28.233   19:07:59 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:28.233    19:07:59 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 514618
00:09:28.233   19:07:59 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:28.233   19:07:59 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:28.233   19:07:59 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 514618'
00:09:28.233  killing process with pid 514618
00:09:28.233   19:07:59 alias_rpc -- common/autotest_common.sh@973 -- # kill 514618
00:09:28.233   19:07:59 alias_rpc -- common/autotest_common.sh@978 -- # wait 514618
00:09:30.762  
00:09:30.762  real	0m3.729s
00:09:30.762  user	0m3.900s
00:09:30.762  sys	0m0.638s
00:09:30.762   19:08:01 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:30.762   19:08:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:30.762  ************************************
00:09:30.762  END TEST alias_rpc
00:09:30.762  ************************************
00:09:30.762   19:08:01  -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]]
00:09:30.762   19:08:01  -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/spdkcli/tcp.sh
00:09:30.762   19:08:01  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:30.762   19:08:01  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:30.762   19:08:01  -- common/autotest_common.sh@10 -- # set +x
00:09:30.762  ************************************
00:09:30.762  START TEST spdkcli_tcp
00:09:30.762  ************************************
00:09:30.762   19:08:01 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/spdkcli/tcp.sh
00:09:30.762  * Looking for test storage...
00:09:30.762  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/spdkcli
00:09:30.762    19:08:01 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:30.762     19:08:01 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version
00:09:30.762     19:08:01 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:30.762    19:08:01 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:30.762    19:08:01 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:30.762    19:08:01 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:30.762    19:08:01 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:30.762    19:08:01 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:09:30.762    19:08:01 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:09:30.762    19:08:01 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:09:30.762    19:08:01 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:09:30.762    19:08:01 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:09:30.762    19:08:01 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:09:30.762    19:08:01 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:09:30.762    19:08:01 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:30.762    19:08:01 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in
00:09:30.762    19:08:01 spdkcli_tcp -- scripts/common.sh@345 -- # : 1
00:09:30.762    19:08:01 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:30.763    19:08:01 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:30.763     19:08:01 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1
00:09:30.763     19:08:01 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1
00:09:30.763     19:08:01 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:30.763     19:08:01 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1
00:09:30.763    19:08:01 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:09:30.763     19:08:01 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2
00:09:30.763     19:08:01 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2
00:09:30.763     19:08:01 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:30.763     19:08:01 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2
00:09:30.763    19:08:01 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:09:30.763    19:08:01 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:30.763    19:08:01 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:30.763    19:08:01 spdkcli_tcp -- scripts/common.sh@368 -- # return 0
00:09:30.763    19:08:01 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:30.763    19:08:01 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:30.763  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:30.763  		--rc genhtml_branch_coverage=1
00:09:30.763  		--rc genhtml_function_coverage=1
00:09:30.763  		--rc genhtml_legend=1
00:09:30.763  		--rc geninfo_all_blocks=1
00:09:30.763  		--rc geninfo_unexecuted_blocks=1
00:09:30.763  		
00:09:30.763  		'
00:09:30.763    19:08:01 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:30.763  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:30.763  		--rc genhtml_branch_coverage=1
00:09:30.763  		--rc genhtml_function_coverage=1
00:09:30.763  		--rc genhtml_legend=1
00:09:30.763  		--rc geninfo_all_blocks=1
00:09:30.763  		--rc geninfo_unexecuted_blocks=1
00:09:30.763  		
00:09:30.763  		'
00:09:30.763    19:08:01 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:30.763  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:30.763  		--rc genhtml_branch_coverage=1
00:09:30.763  		--rc genhtml_function_coverage=1
00:09:30.763  		--rc genhtml_legend=1
00:09:30.763  		--rc geninfo_all_blocks=1
00:09:30.763  		--rc geninfo_unexecuted_blocks=1
00:09:30.763  		
00:09:30.763  		'
00:09:30.763    19:08:01 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:30.763  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:30.763  		--rc genhtml_branch_coverage=1
00:09:30.763  		--rc genhtml_function_coverage=1
00:09:30.763  		--rc genhtml_legend=1
00:09:30.763  		--rc geninfo_all_blocks=1
00:09:30.763  		--rc geninfo_unexecuted_blocks=1
00:09:30.763  		
00:09:30.763  		'
00:09:30.763   19:08:01 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/spdkcli/common.sh
00:09:30.763    19:08:01 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:09:30.763    19:08:01 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/clear_config.py
00:09:30.763   19:08:01 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:09:30.763   19:08:01 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998
00:09:30.763   19:08:01 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:09:30.763   19:08:01 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:09:30.763   19:08:01 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:30.763   19:08:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:09:30.763   19:08:01 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=515204
00:09:30.763   19:08:01 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:09:30.763   19:08:01 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 515204
00:09:30.763   19:08:01 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 515204 ']'
00:09:30.763   19:08:01 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:30.763   19:08:01 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:30.763   19:08:01 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:30.763  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:30.763   19:08:01 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:30.763   19:08:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:09:30.763  [2024-12-06 19:08:01.444814] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:09:30.763  [2024-12-06 19:08:01.444957] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid515204 ]
00:09:30.763  [2024-12-06 19:08:01.582385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:30.763  [2024-12-06 19:08:01.703111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:30.763  [2024-12-06 19:08:01.703115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:31.707   19:08:02 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:31.707   19:08:02 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0
00:09:31.707   19:08:02 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=515345
00:09:31.707   19:08:02 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:09:31.707   19:08:02 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:09:31.965  [
00:09:31.965    "bdev_malloc_delete",
00:09:31.965    "bdev_malloc_create",
00:09:31.965    "bdev_null_resize",
00:09:31.965    "bdev_null_delete",
00:09:31.965    "bdev_null_create",
00:09:31.965    "bdev_nvme_cuse_unregister",
00:09:31.965    "bdev_nvme_cuse_register",
00:09:31.965    "bdev_opal_new_user",
00:09:31.965    "bdev_opal_set_lock_state",
00:09:31.965    "bdev_opal_delete",
00:09:31.965    "bdev_opal_get_info",
00:09:31.965    "bdev_opal_create",
00:09:31.965    "bdev_nvme_opal_revert",
00:09:31.965    "bdev_nvme_opal_init",
00:09:31.965    "bdev_nvme_send_cmd",
00:09:31.965    "bdev_nvme_set_keys",
00:09:31.965    "bdev_nvme_get_path_iostat",
00:09:31.965    "bdev_nvme_get_mdns_discovery_info",
00:09:31.965    "bdev_nvme_stop_mdns_discovery",
00:09:31.965    "bdev_nvme_start_mdns_discovery",
00:09:31.965    "bdev_nvme_set_multipath_policy",
00:09:31.965    "bdev_nvme_set_preferred_path",
00:09:31.965    "bdev_nvme_get_io_paths",
00:09:31.965    "bdev_nvme_remove_error_injection",
00:09:31.965    "bdev_nvme_add_error_injection",
00:09:31.965    "bdev_nvme_get_discovery_info",
00:09:31.965    "bdev_nvme_stop_discovery",
00:09:31.965    "bdev_nvme_start_discovery",
00:09:31.965    "bdev_nvme_get_controller_health_info",
00:09:31.965    "bdev_nvme_disable_controller",
00:09:31.966    "bdev_nvme_enable_controller",
00:09:31.966    "bdev_nvme_reset_controller",
00:09:31.966    "bdev_nvme_get_transport_statistics",
00:09:31.966    "bdev_nvme_apply_firmware",
00:09:31.966    "bdev_nvme_detach_controller",
00:09:31.966    "bdev_nvme_get_controllers",
00:09:31.966    "bdev_nvme_attach_controller",
00:09:31.966    "bdev_nvme_set_hotplug",
00:09:31.966    "bdev_nvme_set_options",
00:09:31.966    "bdev_passthru_delete",
00:09:31.966    "bdev_passthru_create",
00:09:31.966    "bdev_lvol_set_parent_bdev",
00:09:31.966    "bdev_lvol_set_parent",
00:09:31.966    "bdev_lvol_check_shallow_copy",
00:09:31.966    "bdev_lvol_start_shallow_copy",
00:09:31.966    "bdev_lvol_grow_lvstore",
00:09:31.966    "bdev_lvol_get_lvols",
00:09:31.966    "bdev_lvol_get_lvstores",
00:09:31.966    "bdev_lvol_delete",
00:09:31.966    "bdev_lvol_set_read_only",
00:09:31.966    "bdev_lvol_resize",
00:09:31.966    "bdev_lvol_decouple_parent",
00:09:31.966    "bdev_lvol_inflate",
00:09:31.966    "bdev_lvol_rename",
00:09:31.966    "bdev_lvol_clone_bdev",
00:09:31.966    "bdev_lvol_clone",
00:09:31.966    "bdev_lvol_snapshot",
00:09:31.966    "bdev_lvol_create",
00:09:31.966    "bdev_lvol_delete_lvstore",
00:09:31.966    "bdev_lvol_rename_lvstore",
00:09:31.966    "bdev_lvol_create_lvstore",
00:09:31.966    "bdev_raid_set_options",
00:09:31.966    "bdev_raid_remove_base_bdev",
00:09:31.966    "bdev_raid_add_base_bdev",
00:09:31.966    "bdev_raid_delete",
00:09:31.966    "bdev_raid_create",
00:09:31.966    "bdev_raid_get_bdevs",
00:09:31.966    "bdev_error_inject_error",
00:09:31.966    "bdev_error_delete",
00:09:31.966    "bdev_error_create",
00:09:31.966    "bdev_split_delete",
00:09:31.966    "bdev_split_create",
00:09:31.966    "bdev_delay_delete",
00:09:31.966    "bdev_delay_create",
00:09:31.966    "bdev_delay_update_latency",
00:09:31.966    "bdev_zone_block_delete",
00:09:31.966    "bdev_zone_block_create",
00:09:31.966    "blobfs_create",
00:09:31.966    "blobfs_detect",
00:09:31.966    "blobfs_set_cache_size",
00:09:31.966    "bdev_crypto_delete",
00:09:31.966    "bdev_crypto_create",
00:09:31.966    "bdev_aio_delete",
00:09:31.966    "bdev_aio_rescan",
00:09:31.966    "bdev_aio_create",
00:09:31.966    "bdev_ftl_set_property",
00:09:31.966    "bdev_ftl_get_properties",
00:09:31.966    "bdev_ftl_get_stats",
00:09:31.966    "bdev_ftl_unmap",
00:09:31.966    "bdev_ftl_unload",
00:09:31.966    "bdev_ftl_delete",
00:09:31.966    "bdev_ftl_load",
00:09:31.966    "bdev_ftl_create",
00:09:31.966    "bdev_virtio_attach_controller",
00:09:31.966    "bdev_virtio_scsi_get_devices",
00:09:31.966    "bdev_virtio_detach_controller",
00:09:31.966    "bdev_virtio_blk_set_hotplug",
00:09:31.966    "bdev_iscsi_delete",
00:09:31.966    "bdev_iscsi_create",
00:09:31.966    "bdev_iscsi_set_options",
00:09:31.966    "accel_error_inject_error",
00:09:31.966    "ioat_scan_accel_module",
00:09:31.966    "dsa_scan_accel_module",
00:09:31.966    "iaa_scan_accel_module",
00:09:31.966    "dpdk_cryptodev_get_driver",
00:09:31.966    "dpdk_cryptodev_set_driver",
00:09:31.966    "dpdk_cryptodev_scan_accel_module",
00:09:31.966    "vfu_virtio_create_fs_endpoint",
00:09:31.966    "vfu_virtio_create_scsi_endpoint",
00:09:31.966    "vfu_virtio_scsi_remove_target",
00:09:31.966    "vfu_virtio_scsi_add_target",
00:09:31.966    "vfu_virtio_create_blk_endpoint",
00:09:31.966    "vfu_virtio_delete_endpoint",
00:09:31.966    "keyring_file_remove_key",
00:09:31.966    "keyring_file_add_key",
00:09:31.966    "keyring_linux_set_options",
00:09:31.966    "fsdev_aio_delete",
00:09:31.966    "fsdev_aio_create",
00:09:31.966    "iscsi_get_histogram",
00:09:31.966    "iscsi_enable_histogram",
00:09:31.966    "iscsi_set_options",
00:09:31.966    "iscsi_get_auth_groups",
00:09:31.966    "iscsi_auth_group_remove_secret",
00:09:31.966    "iscsi_auth_group_add_secret",
00:09:31.966    "iscsi_delete_auth_group",
00:09:31.966    "iscsi_create_auth_group",
00:09:31.966    "iscsi_set_discovery_auth",
00:09:31.966    "iscsi_get_options",
00:09:31.966    "iscsi_target_node_request_logout",
00:09:31.966    "iscsi_target_node_set_redirect",
00:09:31.966    "iscsi_target_node_set_auth",
00:09:31.966    "iscsi_target_node_add_lun",
00:09:31.966    "iscsi_get_stats",
00:09:31.966    "iscsi_get_connections",
00:09:31.966    "iscsi_portal_group_set_auth",
00:09:31.966    "iscsi_start_portal_group",
00:09:31.966    "iscsi_delete_portal_group",
00:09:31.966    "iscsi_create_portal_group",
00:09:31.966    "iscsi_get_portal_groups",
00:09:31.966    "iscsi_delete_target_node",
00:09:31.966    "iscsi_target_node_remove_pg_ig_maps",
00:09:31.966    "iscsi_target_node_add_pg_ig_maps",
00:09:31.966    "iscsi_create_target_node",
00:09:31.966    "iscsi_get_target_nodes",
00:09:31.966    "iscsi_delete_initiator_group",
00:09:31.966    "iscsi_initiator_group_remove_initiators",
00:09:31.966    "iscsi_initiator_group_add_initiators",
00:09:31.966    "iscsi_create_initiator_group",
00:09:31.966    "iscsi_get_initiator_groups",
00:09:31.966    "nvmf_set_crdt",
00:09:31.966    "nvmf_set_config",
00:09:31.966    "nvmf_set_max_subsystems",
00:09:31.966    "nvmf_stop_mdns_prr",
00:09:31.966    "nvmf_publish_mdns_prr",
00:09:31.966    "nvmf_subsystem_get_listeners",
00:09:31.966    "nvmf_subsystem_get_qpairs",
00:09:31.966    "nvmf_subsystem_get_controllers",
00:09:31.966    "nvmf_get_stats",
00:09:31.966    "nvmf_get_transports",
00:09:31.966    "nvmf_create_transport",
00:09:31.966    "nvmf_get_targets",
00:09:31.966    "nvmf_delete_target",
00:09:31.966    "nvmf_create_target",
00:09:31.966    "nvmf_subsystem_allow_any_host",
00:09:31.966    "nvmf_subsystem_set_keys",
00:09:31.966    "nvmf_subsystem_remove_host",
00:09:31.966    "nvmf_subsystem_add_host",
00:09:31.966    "nvmf_ns_remove_host",
00:09:31.966    "nvmf_ns_add_host",
00:09:31.966    "nvmf_subsystem_remove_ns",
00:09:31.966    "nvmf_subsystem_set_ns_ana_group",
00:09:31.966    "nvmf_subsystem_add_ns",
00:09:31.966    "nvmf_subsystem_listener_set_ana_state",
00:09:31.966    "nvmf_discovery_get_referrals",
00:09:31.966    "nvmf_discovery_remove_referral",
00:09:31.966    "nvmf_discovery_add_referral",
00:09:31.966    "nvmf_subsystem_remove_listener",
00:09:31.966    "nvmf_subsystem_add_listener",
00:09:31.966    "nvmf_delete_subsystem",
00:09:31.966    "nvmf_create_subsystem",
00:09:31.966    "nvmf_get_subsystems",
00:09:31.966    "env_dpdk_get_mem_stats",
00:09:31.966    "nbd_get_disks",
00:09:31.966    "nbd_stop_disk",
00:09:31.966    "nbd_start_disk",
00:09:31.966    "ublk_recover_disk",
00:09:31.966    "ublk_get_disks",
00:09:31.966    "ublk_stop_disk",
00:09:31.966    "ublk_start_disk",
00:09:31.966    "ublk_destroy_target",
00:09:31.966    "ublk_create_target",
00:09:31.966    "virtio_blk_create_transport",
00:09:31.966    "virtio_blk_get_transports",
00:09:31.966    "vhost_controller_set_coalescing",
00:09:31.966    "vhost_get_controllers",
00:09:31.966    "vhost_delete_controller",
00:09:31.966    "vhost_create_blk_controller",
00:09:31.966    "vhost_scsi_controller_remove_target",
00:09:31.966    "vhost_scsi_controller_add_target",
00:09:31.966    "vhost_start_scsi_controller",
00:09:31.966    "vhost_create_scsi_controller",
00:09:31.966    "thread_set_cpumask",
00:09:31.966    "scheduler_set_options",
00:09:31.966    "framework_get_governor",
00:09:31.966    "framework_get_scheduler",
00:09:31.966    "framework_set_scheduler",
00:09:31.966    "framework_get_reactors",
00:09:31.966    "thread_get_io_channels",
00:09:31.966    "thread_get_pollers",
00:09:31.966    "thread_get_stats",
00:09:31.966    "framework_monitor_context_switch",
00:09:31.966    "spdk_kill_instance",
00:09:31.966    "log_enable_timestamps",
00:09:31.966    "log_get_flags",
00:09:31.966    "log_clear_flag",
00:09:31.966    "log_set_flag",
00:09:31.966    "log_get_level",
00:09:31.966    "log_set_level",
00:09:31.966    "log_get_print_level",
00:09:31.966    "log_set_print_level",
00:09:31.966    "framework_enable_cpumask_locks",
00:09:31.966    "framework_disable_cpumask_locks",
00:09:31.966    "framework_wait_init",
00:09:31.966    "framework_start_init",
00:09:31.966    "scsi_get_devices",
00:09:31.966    "bdev_get_histogram",
00:09:31.966    "bdev_enable_histogram",
00:09:31.966    "bdev_set_qos_limit",
00:09:31.966    "bdev_set_qd_sampling_period",
00:09:31.966    "bdev_get_bdevs",
00:09:31.966    "bdev_reset_iostat",
00:09:31.966    "bdev_get_iostat",
00:09:31.966    "bdev_examine",
00:09:31.966    "bdev_wait_for_examine",
00:09:31.966    "bdev_set_options",
00:09:31.966    "accel_get_stats",
00:09:31.966    "accel_set_options",
00:09:31.966    "accel_set_driver",
00:09:31.966    "accel_crypto_key_destroy",
00:09:31.966    "accel_crypto_keys_get",
00:09:31.966    "accel_crypto_key_create",
00:09:31.966    "accel_assign_opc",
00:09:31.966    "accel_get_module_info",
00:09:31.966    "accel_get_opc_assignments",
00:09:31.966    "vmd_rescan",
00:09:31.966    "vmd_remove_device",
00:09:31.966    "vmd_enable",
00:09:31.966    "sock_get_default_impl",
00:09:31.966    "sock_set_default_impl",
00:09:31.966    "sock_impl_set_options",
00:09:31.966    "sock_impl_get_options",
00:09:31.966    "iobuf_get_stats",
00:09:31.966    "iobuf_set_options",
00:09:31.966    "keyring_get_keys",
00:09:31.966    "vfu_tgt_set_base_path",
00:09:31.966    "framework_get_pci_devices",
00:09:31.966    "framework_get_config",
00:09:31.966    "framework_get_subsystems",
00:09:31.966    "fsdev_set_opts",
00:09:31.966    "fsdev_get_opts",
00:09:31.966    "trace_get_info",
00:09:31.966    "trace_get_tpoint_group_mask",
00:09:31.966    "trace_disable_tpoint_group",
00:09:31.966    "trace_enable_tpoint_group",
00:09:31.966    "trace_clear_tpoint_mask",
00:09:31.966    "trace_set_tpoint_mask",
00:09:31.966    "notify_get_notifications",
00:09:31.966    "notify_get_types",
00:09:31.966    "spdk_get_version",
00:09:31.967    "rpc_get_methods"
00:09:31.967  ]
00:09:31.967   19:08:02 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:09:31.967   19:08:02 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:31.967   19:08:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:09:31.967   19:08:02 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:09:31.967   19:08:02 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 515204
00:09:31.967   19:08:02 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 515204 ']'
00:09:31.967   19:08:02 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 515204
00:09:31.967    19:08:02 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname
00:09:31.967   19:08:02 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:31.967    19:08:02 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 515204
00:09:31.967   19:08:02 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:31.967   19:08:02 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:31.967   19:08:02 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 515204'
00:09:31.967  killing process with pid 515204
00:09:31.967   19:08:02 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 515204
00:09:31.967   19:08:02 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 515204
00:09:34.495  
00:09:34.495  real	0m3.771s
00:09:34.495  user	0m6.946s
00:09:34.495  sys	0m0.667s
00:09:34.495   19:08:04 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:34.495   19:08:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:09:34.495  ************************************
00:09:34.495  END TEST spdkcli_tcp
00:09:34.495  ************************************
00:09:34.495   19:08:04  -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:09:34.495   19:08:04  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:34.495   19:08:04  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:34.495   19:08:04  -- common/autotest_common.sh@10 -- # set +x
00:09:34.495  ************************************
00:09:34.495  START TEST dpdk_mem_utility
00:09:34.495  ************************************
00:09:34.495   19:08:05 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:09:34.495  * Looking for test storage...
00:09:34.495  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/dpdk_memory_utility
00:09:34.495    19:08:05 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:34.495     19:08:05 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version
00:09:34.495     19:08:05 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:34.495    19:08:05 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:34.495    19:08:05 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:34.495    19:08:05 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:34.495    19:08:05 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:34.495    19:08:05 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-:
00:09:34.495    19:08:05 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1
00:09:34.495    19:08:05 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-:
00:09:34.495    19:08:05 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2
00:09:34.495    19:08:05 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<'
00:09:34.495    19:08:05 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2
00:09:34.495    19:08:05 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1
00:09:34.495    19:08:05 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:34.495    19:08:05 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in
00:09:34.495    19:08:05 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1
00:09:34.495    19:08:05 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:34.495    19:08:05 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:34.495     19:08:05 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1
00:09:34.495     19:08:05 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1
00:09:34.495     19:08:05 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:34.495     19:08:05 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1
00:09:34.495    19:08:05 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1
00:09:34.495     19:08:05 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2
00:09:34.495     19:08:05 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2
00:09:34.495     19:08:05 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:34.495     19:08:05 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2
00:09:34.495    19:08:05 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2
00:09:34.495    19:08:05 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:34.495    19:08:05 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:34.495    19:08:05 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0
00:09:34.495    19:08:05 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:34.495    19:08:05 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:34.495  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:34.495  		--rc genhtml_branch_coverage=1
00:09:34.495  		--rc genhtml_function_coverage=1
00:09:34.495  		--rc genhtml_legend=1
00:09:34.495  		--rc geninfo_all_blocks=1
00:09:34.495  		--rc geninfo_unexecuted_blocks=1
00:09:34.495  		
00:09:34.495  		'
00:09:34.495    19:08:05 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:34.495  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:34.495  		--rc genhtml_branch_coverage=1
00:09:34.495  		--rc genhtml_function_coverage=1
00:09:34.495  		--rc genhtml_legend=1
00:09:34.495  		--rc geninfo_all_blocks=1
00:09:34.495  		--rc geninfo_unexecuted_blocks=1
00:09:34.495  		
00:09:34.495  		'
00:09:34.495    19:08:05 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:34.495  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:34.495  		--rc genhtml_branch_coverage=1
00:09:34.495  		--rc genhtml_function_coverage=1
00:09:34.495  		--rc genhtml_legend=1
00:09:34.495  		--rc geninfo_all_blocks=1
00:09:34.495  		--rc geninfo_unexecuted_blocks=1
00:09:34.495  		
00:09:34.495  		'
00:09:34.495    19:08:05 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:34.495  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:34.495  		--rc genhtml_branch_coverage=1
00:09:34.495  		--rc genhtml_function_coverage=1
00:09:34.495  		--rc genhtml_legend=1
00:09:34.495  		--rc geninfo_all_blocks=1
00:09:34.495  		--rc geninfo_unexecuted_blocks=1
00:09:34.495  		
00:09:34.495  		'
00:09:34.495   19:08:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:09:34.495   19:08:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=515690
00:09:34.495   19:08:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:09:34.495   19:08:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 515690
00:09:34.495   19:08:05 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 515690 ']'
00:09:34.495   19:08:05 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:34.495   19:08:05 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:34.495   19:08:05 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:34.495  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:34.495   19:08:05 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:34.495   19:08:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:09:34.495  [2024-12-06 19:08:05.263244] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:09:34.495  [2024-12-06 19:08:05.263460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid515690 ]
00:09:34.495  [2024-12-06 19:08:05.420816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:34.753  [2024-12-06 19:08:05.538951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:35.688   19:08:06 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:35.688   19:08:06 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0
00:09:35.688   19:08:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:09:35.688   19:08:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:09:35.688   19:08:06 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:35.688   19:08:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:09:35.688  {
00:09:35.688  "filename": "/tmp/spdk_mem_dump.txt"
00:09:35.688  }
00:09:35.688   19:08:06 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:35.688   19:08:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:09:35.688  DPDK memory size 824.000000 MiB in 1 heap(s)
00:09:35.688  1 heaps totaling size 824.000000 MiB
00:09:35.688    size:  824.000000 MiB heap id: 0
00:09:35.688  end heaps----------
00:09:35.688  9 mempools totaling size 603.782043 MiB
00:09:35.688    size:  212.674988 MiB name: PDU_immediate_data_Pool
00:09:35.688    size:  158.602051 MiB name: PDU_data_out_Pool
00:09:35.688    size:  100.555481 MiB name: bdev_io_515690
00:09:35.688    size:   50.003479 MiB name: msgpool_515690
00:09:35.688    size:   36.509338 MiB name: fsdev_io_515690
00:09:35.688    size:   21.763794 MiB name: PDU_Pool
00:09:35.688    size:   19.513306 MiB name: SCSI_TASK_Pool
00:09:35.688    size:    4.133484 MiB name: evtpool_515690
00:09:35.688    size:    0.026123 MiB name: Session_Pool
00:09:35.688  end mempools-------
00:09:35.688  6 memzones totaling size 4.142822 MiB
00:09:35.688    size:    1.000366 MiB name: RG_ring_0_515690
00:09:35.688    size:    1.000366 MiB name: RG_ring_1_515690
00:09:35.688    size:    1.000366 MiB name: RG_ring_4_515690
00:09:35.688    size:    1.000366 MiB name: RG_ring_5_515690
00:09:35.688    size:    0.125366 MiB name: RG_ring_2_515690
00:09:35.688    size:    0.015991 MiB name: RG_ring_3_515690
00:09:35.688  end memzones-------
00:09:35.688   19:08:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:09:35.688  heap id: 0 total size: 824.000000 MiB number of busy elements: 44 number of free elements: 19
00:09:35.688    list of free elements. size: 16.847595 MiB
00:09:35.688      element at address: 0x200006400000 with size:    1.995972 MiB
00:09:35.688      element at address: 0x20000a600000 with size:    1.995972 MiB
00:09:35.688      element at address: 0x200003e00000 with size:    1.991028 MiB
00:09:35.688      element at address: 0x200019500040 with size:    0.999939 MiB
00:09:35.688      element at address: 0x200019900040 with size:    0.999939 MiB
00:09:35.688      element at address: 0x200019a00000 with size:    0.999329 MiB
00:09:35.688      element at address: 0x200000400000 with size:    0.998108 MiB
00:09:35.688      element at address: 0x200032600000 with size:    0.994324 MiB
00:09:35.688      element at address: 0x200019200000 with size:    0.959900 MiB
00:09:35.688      element at address: 0x200019d00040 with size:    0.937256 MiB
00:09:35.688      element at address: 0x200000200000 with size:    0.716980 MiB
00:09:35.688      element at address: 0x20001b400000 with size:    0.583191 MiB
00:09:35.688      element at address: 0x200000c00000 with size:    0.495300 MiB
00:09:35.688      element at address: 0x200019600000 with size:    0.491150 MiB
00:09:35.688      element at address: 0x200019e00000 with size:    0.485657 MiB
00:09:35.688      element at address: 0x200012c00000 with size:    0.436157 MiB
00:09:35.688      element at address: 0x200028800000 with size:    0.411072 MiB
00:09:35.688      element at address: 0x200000800000 with size:    0.355286 MiB
00:09:35.688      element at address: 0x20000a5ff040 with size:    0.001038 MiB
00:09:35.688    list of standard malloc elements. size: 199.221497 MiB
00:09:35.688      element at address: 0x20000a7fef80 with size:  132.000183 MiB
00:09:35.688      element at address: 0x2000065fef80 with size:   64.000183 MiB
00:09:35.688      element at address: 0x2000193fff80 with size:    1.000183 MiB
00:09:35.688      element at address: 0x2000197fff80 with size:    1.000183 MiB
00:09:35.688      element at address: 0x200019bfff80 with size:    1.000183 MiB
00:09:35.688      element at address: 0x2000003d9e80 with size:    0.140808 MiB
00:09:35.688      element at address: 0x200019deff40 with size:    0.062683 MiB
00:09:35.688      element at address: 0x2000003fdf40 with size:    0.007996 MiB
00:09:35.688      element at address: 0x200012bff040 with size:    0.000427 MiB
00:09:35.688      element at address: 0x200012bffa00 with size:    0.000366 MiB
00:09:35.688      element at address: 0x2000002d7b00 with size:    0.000244 MiB
00:09:35.688      element at address: 0x2000003d9d80 with size:    0.000244 MiB
00:09:35.688      element at address: 0x2000004ff840 with size:    0.000244 MiB
00:09:35.688      element at address: 0x2000004ff940 with size:    0.000244 MiB
00:09:35.688      element at address: 0x2000004ffa40 with size:    0.000244 MiB
00:09:35.688      element at address: 0x2000004ffcc0 with size:    0.000244 MiB
00:09:35.688      element at address: 0x2000004ffdc0 with size:    0.000244 MiB
00:09:35.688      element at address: 0x20000087f3c0 with size:    0.000244 MiB
00:09:35.688      element at address: 0x20000087f4c0 with size:    0.000244 MiB
00:09:35.688      element at address: 0x2000008ff800 with size:    0.000244 MiB
00:09:35.688      element at address: 0x2000008ffa80 with size:    0.000244 MiB
00:09:35.688      element at address: 0x200000cfef00 with size:    0.000244 MiB
00:09:35.688      element at address: 0x200000cff000 with size:    0.000244 MiB
00:09:35.688      element at address: 0x20000a5ff480 with size:    0.000244 MiB
00:09:35.688      element at address: 0x20000a5ff580 with size:    0.000244 MiB
00:09:35.688      element at address: 0x20000a5ff680 with size:    0.000244 MiB
00:09:35.688      element at address: 0x20000a5ff780 with size:    0.000244 MiB
00:09:35.688      element at address: 0x20000a5ff880 with size:    0.000244 MiB
00:09:35.688      element at address: 0x20000a5ff980 with size:    0.000244 MiB
00:09:35.688      element at address: 0x20000a5ffc00 with size:    0.000244 MiB
00:09:35.688      element at address: 0x20000a5ffd00 with size:    0.000244 MiB
00:09:35.688      element at address: 0x20000a5ffe00 with size:    0.000244 MiB
00:09:35.688      element at address: 0x20000a5fff00 with size:    0.000244 MiB
00:09:35.688      element at address: 0x200012bff200 with size:    0.000244 MiB
00:09:35.688      element at address: 0x200012bff300 with size:    0.000244 MiB
00:09:35.688      element at address: 0x200012bff400 with size:    0.000244 MiB
00:09:35.688      element at address: 0x200012bff500 with size:    0.000244 MiB
00:09:35.688      element at address: 0x200012bff600 with size:    0.000244 MiB
00:09:35.688      element at address: 0x200012bff700 with size:    0.000244 MiB
00:09:35.688      element at address: 0x200012bff800 with size:    0.000244 MiB
00:09:35.688      element at address: 0x200012bff900 with size:    0.000244 MiB
00:09:35.688      element at address: 0x200012bffb80 with size:    0.000244 MiB
00:09:35.688      element at address: 0x200012bffc80 with size:    0.000244 MiB
00:09:35.688      element at address: 0x200012bfff00 with size:    0.000244 MiB
00:09:35.688    list of memzone associated elements. size: 607.930908 MiB
00:09:35.688      element at address: 0x20001b4954c0 with size:  211.416809 MiB
00:09:35.688        associated memzone info: size:  211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:09:35.688      element at address: 0x20002886ff80 with size:  157.562622 MiB
00:09:35.688        associated memzone info: size:  157.562439 MiB name: MP_PDU_data_out_Pool_0
00:09:35.688      element at address: 0x200012df1e40 with size:  100.055115 MiB
00:09:35.688        associated memzone info: size:  100.054932 MiB name: MP_bdev_io_515690_0
00:09:35.688      element at address: 0x200000dff340 with size:   48.003113 MiB
00:09:35.688        associated memzone info: size:   48.002930 MiB name: MP_msgpool_515690_0
00:09:35.688      element at address: 0x200003ffdb40 with size:   36.008972 MiB
00:09:35.688        associated memzone info: size:   36.008789 MiB name: MP_fsdev_io_515690_0
00:09:35.689      element at address: 0x200019fbe900 with size:   20.255615 MiB
00:09:35.689        associated memzone info: size:   20.255432 MiB name: MP_PDU_Pool_0
00:09:35.689      element at address: 0x2000327feb00 with size:   18.005127 MiB
00:09:35.689        associated memzone info: size:   18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:09:35.689      element at address: 0x2000004ffec0 with size:    3.000305 MiB
00:09:35.689        associated memzone info: size:    3.000122 MiB name: MP_evtpool_515690_0
00:09:35.689      element at address: 0x2000009ffdc0 with size:    2.000549 MiB
00:09:35.689        associated memzone info: size:    2.000366 MiB name: RG_MP_msgpool_515690
00:09:35.689      element at address: 0x2000002d7c00 with size:    1.008179 MiB
00:09:35.689        associated memzone info: size:    1.007996 MiB name: MP_evtpool_515690
00:09:35.689      element at address: 0x2000196fde00 with size:    1.008179 MiB
00:09:35.689        associated memzone info: size:    1.007996 MiB name: MP_PDU_Pool
00:09:35.689      element at address: 0x200019ebc780 with size:    1.008179 MiB
00:09:35.689        associated memzone info: size:    1.007996 MiB name: MP_PDU_immediate_data_Pool
00:09:35.689      element at address: 0x2000192fde00 with size:    1.008179 MiB
00:09:35.689        associated memzone info: size:    1.007996 MiB name: MP_PDU_data_out_Pool
00:09:35.689      element at address: 0x200012cefcc0 with size:    1.008179 MiB
00:09:35.689        associated memzone info: size:    1.007996 MiB name: MP_SCSI_TASK_Pool
00:09:35.689      element at address: 0x200000cff100 with size:    1.000549 MiB
00:09:35.689        associated memzone info: size:    1.000366 MiB name: RG_ring_0_515690
00:09:35.689      element at address: 0x2000008ffb80 with size:    1.000549 MiB
00:09:35.689        associated memzone info: size:    1.000366 MiB name: RG_ring_1_515690
00:09:35.689      element at address: 0x200019affd40 with size:    1.000549 MiB
00:09:35.689        associated memzone info: size:    1.000366 MiB name: RG_ring_4_515690
00:09:35.689      element at address: 0x2000326fe8c0 with size:    1.000549 MiB
00:09:35.689        associated memzone info: size:    1.000366 MiB name: RG_ring_5_515690
00:09:35.689      element at address: 0x20000087f5c0 with size:    0.500549 MiB
00:09:35.689        associated memzone info: size:    0.500366 MiB name: RG_MP_fsdev_io_515690
00:09:35.689      element at address: 0x200000c7ecc0 with size:    0.500549 MiB
00:09:35.689        associated memzone info: size:    0.500366 MiB name: RG_MP_bdev_io_515690
00:09:35.689      element at address: 0x20001967dbc0 with size:    0.500549 MiB
00:09:35.689        associated memzone info: size:    0.500366 MiB name: RG_MP_PDU_Pool
00:09:35.689      element at address: 0x200012c6fa80 with size:    0.500549 MiB
00:09:35.689        associated memzone info: size:    0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:09:35.689      element at address: 0x200019e7c540 with size:    0.250549 MiB
00:09:35.689        associated memzone info: size:    0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:09:35.689      element at address: 0x2000002b78c0 with size:    0.125549 MiB
00:09:35.689        associated memzone info: size:    0.125366 MiB name: RG_MP_evtpool_515690
00:09:35.689      element at address: 0x20000085f180 with size:    0.125549 MiB
00:09:35.689        associated memzone info: size:    0.125366 MiB name: RG_ring_2_515690
00:09:35.689      element at address: 0x2000192f5bc0 with size:    0.031799 MiB
00:09:35.689        associated memzone info: size:    0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:09:35.689      element at address: 0x2000288693c0 with size:    0.023804 MiB
00:09:35.689        associated memzone info: size:    0.023621 MiB name: MP_Session_Pool_0
00:09:35.689      element at address: 0x20000085af40 with size:    0.016174 MiB
00:09:35.689        associated memzone info: size:    0.015991 MiB name: RG_ring_3_515690
00:09:35.689      element at address: 0x20002886f540 with size:    0.002502 MiB
00:09:35.689        associated memzone info: size:    0.002319 MiB name: RG_MP_Session_Pool
00:09:35.689      element at address: 0x2000004ffb40 with size:    0.000366 MiB
00:09:35.689        associated memzone info: size:    0.000183 MiB name: MP_msgpool_515690
00:09:35.689      element at address: 0x2000008ff900 with size:    0.000366 MiB
00:09:35.689        associated memzone info: size:    0.000183 MiB name: MP_fsdev_io_515690
00:09:35.689      element at address: 0x200012bffd80 with size:    0.000366 MiB
00:09:35.689        associated memzone info: size:    0.000183 MiB name: MP_bdev_io_515690
00:09:35.689      element at address: 0x20000a5ffa80 with size:    0.000366 MiB
00:09:35.689        associated memzone info: size:    0.000183 MiB name: MP_Session_Pool
00:09:35.689   19:08:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:09:35.689   19:08:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 515690
00:09:35.689   19:08:06 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 515690 ']'
00:09:35.689   19:08:06 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 515690
00:09:35.689    19:08:06 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:09:35.689   19:08:06 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:35.689    19:08:06 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 515690
00:09:35.689   19:08:06 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:35.689   19:08:06 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:35.689   19:08:06 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 515690'
00:09:35.689  killing process with pid 515690
00:09:35.689   19:08:06 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 515690
00:09:35.689   19:08:06 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 515690
00:09:37.598  
00:09:37.598  real	0m3.533s
00:09:37.598  user	0m3.552s
00:09:37.598  sys	0m0.648s
00:09:37.598   19:08:08 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:37.598   19:08:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:09:37.598  ************************************
00:09:37.598  END TEST dpdk_mem_utility
00:09:37.598  ************************************
00:09:37.856   19:08:08  -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/event.sh
00:09:37.857   19:08:08  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:37.857   19:08:08  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:37.857   19:08:08  -- common/autotest_common.sh@10 -- # set +x
00:09:37.857  ************************************
00:09:37.857  START TEST event
00:09:37.857  ************************************
00:09:37.857   19:08:08 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/event.sh
00:09:37.857  * Looking for test storage...
00:09:37.857  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event
00:09:37.857    19:08:08 event -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:37.857     19:08:08 event -- common/autotest_common.sh@1711 -- # lcov --version
00:09:37.857     19:08:08 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:37.857    19:08:08 event -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:37.857    19:08:08 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:37.857    19:08:08 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:37.857    19:08:08 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:37.857    19:08:08 event -- scripts/common.sh@336 -- # IFS=.-:
00:09:37.857    19:08:08 event -- scripts/common.sh@336 -- # read -ra ver1
00:09:37.857    19:08:08 event -- scripts/common.sh@337 -- # IFS=.-:
00:09:37.857    19:08:08 event -- scripts/common.sh@337 -- # read -ra ver2
00:09:37.857    19:08:08 event -- scripts/common.sh@338 -- # local 'op=<'
00:09:37.857    19:08:08 event -- scripts/common.sh@340 -- # ver1_l=2
00:09:37.857    19:08:08 event -- scripts/common.sh@341 -- # ver2_l=1
00:09:37.857    19:08:08 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:37.857    19:08:08 event -- scripts/common.sh@344 -- # case "$op" in
00:09:37.857    19:08:08 event -- scripts/common.sh@345 -- # : 1
00:09:37.857    19:08:08 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:37.857    19:08:08 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:37.857     19:08:08 event -- scripts/common.sh@365 -- # decimal 1
00:09:37.857     19:08:08 event -- scripts/common.sh@353 -- # local d=1
00:09:37.857     19:08:08 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:37.857     19:08:08 event -- scripts/common.sh@355 -- # echo 1
00:09:37.857    19:08:08 event -- scripts/common.sh@365 -- # ver1[v]=1
00:09:37.857     19:08:08 event -- scripts/common.sh@366 -- # decimal 2
00:09:37.857     19:08:08 event -- scripts/common.sh@353 -- # local d=2
00:09:37.857     19:08:08 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:37.857     19:08:08 event -- scripts/common.sh@355 -- # echo 2
00:09:37.857    19:08:08 event -- scripts/common.sh@366 -- # ver2[v]=2
00:09:37.857    19:08:08 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:37.857    19:08:08 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:37.857    19:08:08 event -- scripts/common.sh@368 -- # return 0
00:09:37.857    19:08:08 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:37.857    19:08:08 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:37.857  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:37.857  		--rc genhtml_branch_coverage=1
00:09:37.857  		--rc genhtml_function_coverage=1
00:09:37.857  		--rc genhtml_legend=1
00:09:37.857  		--rc geninfo_all_blocks=1
00:09:37.857  		--rc geninfo_unexecuted_blocks=1
00:09:37.857  		
00:09:37.857  		'
00:09:37.857    19:08:08 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:37.857  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:37.857  		--rc genhtml_branch_coverage=1
00:09:37.857  		--rc genhtml_function_coverage=1
00:09:37.857  		--rc genhtml_legend=1
00:09:37.857  		--rc geninfo_all_blocks=1
00:09:37.857  		--rc geninfo_unexecuted_blocks=1
00:09:37.857  		
00:09:37.857  		'
00:09:37.857    19:08:08 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:37.857  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:37.857  		--rc genhtml_branch_coverage=1
00:09:37.857  		--rc genhtml_function_coverage=1
00:09:37.857  		--rc genhtml_legend=1
00:09:37.857  		--rc geninfo_all_blocks=1
00:09:37.857  		--rc geninfo_unexecuted_blocks=1
00:09:37.857  		
00:09:37.857  		'
00:09:37.857    19:08:08 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:37.857  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:37.857  		--rc genhtml_branch_coverage=1
00:09:37.857  		--rc genhtml_function_coverage=1
00:09:37.857  		--rc genhtml_legend=1
00:09:37.857  		--rc geninfo_all_blocks=1
00:09:37.857  		--rc geninfo_unexecuted_blocks=1
00:09:37.857  		
00:09:37.857  		'
00:09:37.857   19:08:08 event -- event/event.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/bdev/nbd_common.sh
00:09:37.857    19:08:08 event -- bdev/nbd_common.sh@6 -- # set -e
00:09:37.857   19:08:08 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:09:37.857   19:08:08 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:09:37.857   19:08:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:37.857   19:08:08 event -- common/autotest_common.sh@10 -- # set +x
00:09:37.857  ************************************
00:09:37.857  START TEST event_perf
00:09:37.857  ************************************
00:09:37.857   19:08:08 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:09:37.857  Running I/O for 1 seconds...[2024-12-06 19:08:08.786617] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:09:37.857  [2024-12-06 19:08:08.786753] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid516160 ]
00:09:38.115  [2024-12-06 19:08:08.920158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:38.115  [2024-12-06 19:08:09.051536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:38.115  [2024-12-06 19:08:09.051577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:38.115  [2024-12-06 19:08:09.051634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:38.115  [2024-12-06 19:08:09.051640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:09:39.483  Running I/O for 1 seconds...
00:09:39.483  lcore  0:   221986
00:09:39.483  lcore  1:   221986
00:09:39.483  lcore  2:   221988
00:09:39.483  lcore  3:   221990
00:09:39.483  done.
00:09:39.483  
00:09:39.483  real	0m1.544s
00:09:39.483  user	0m4.369s
00:09:39.483  sys	0m0.160s
00:09:39.483   19:08:10 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:39.483   19:08:10 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:09:39.483  ************************************
00:09:39.483  END TEST event_perf
00:09:39.483  ************************************
00:09:39.483   19:08:10 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:09:39.483   19:08:10 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:09:39.483   19:08:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:39.483   19:08:10 event -- common/autotest_common.sh@10 -- # set +x
00:09:39.483  ************************************
00:09:39.483  START TEST event_reactor
00:09:39.483  ************************************
00:09:39.483   19:08:10 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:09:39.483  [2024-12-06 19:08:10.385121] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:09:39.483  [2024-12-06 19:08:10.385261] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid516437 ]
00:09:39.741  [2024-12-06 19:08:10.519523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:39.741  [2024-12-06 19:08:10.644792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:41.114  test_start
00:09:41.114  oneshot
00:09:41.114  tick 100
00:09:41.114  tick 100
00:09:41.114  tick 250
00:09:41.114  tick 100
00:09:41.114  tick 100
00:09:41.114  tick 100
00:09:41.114  tick 250
00:09:41.114  tick 500
00:09:41.114  tick 100
00:09:41.114  tick 100
00:09:41.114  tick 250
00:09:41.114  tick 100
00:09:41.114  tick 100
00:09:41.114  test_end
00:09:41.114  
00:09:41.114  real	0m1.513s
00:09:41.114  user	0m1.361s
00:09:41.114  sys	0m0.144s
00:09:41.114   19:08:11 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:41.114   19:08:11 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:09:41.114  ************************************
00:09:41.114  END TEST event_reactor
00:09:41.114  ************************************
00:09:41.114   19:08:11 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:09:41.114   19:08:11 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:09:41.114   19:08:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:41.114   19:08:11 event -- common/autotest_common.sh@10 -- # set +x
00:09:41.114  ************************************
00:09:41.114  START TEST event_reactor_perf
00:09:41.114  ************************************
00:09:41.114   19:08:11 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:09:41.114  [2024-12-06 19:08:11.953717] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:09:41.114  [2024-12-06 19:08:11.953827] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid516601 ]
00:09:41.373  [2024-12-06 19:08:12.085786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:41.373  [2024-12-06 19:08:12.200736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:42.746  test_start
00:09:42.746  test_end
00:09:42.746  Performance:   330359 events per second
00:09:42.746  
00:09:42.746  real	0m1.492s
00:09:42.746  user	0m1.362s
00:09:42.746  sys	0m0.122s
00:09:42.746   19:08:13 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:42.746   19:08:13 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:09:42.746  ************************************
00:09:42.746  END TEST event_reactor_perf
00:09:42.746  ************************************
00:09:42.746    19:08:13 event -- event/event.sh@49 -- # uname -s
00:09:42.746   19:08:13 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:09:42.746   19:08:13 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:09:42.746   19:08:13 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:42.746   19:08:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:42.746   19:08:13 event -- common/autotest_common.sh@10 -- # set +x
00:09:42.746  ************************************
00:09:42.746  START TEST event_scheduler
00:09:42.746  ************************************
00:09:42.746   19:08:13 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:09:42.746  * Looking for test storage...
00:09:42.746  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler
00:09:42.746    19:08:13 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:42.746     19:08:13 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version
00:09:42.746     19:08:13 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:42.747    19:08:13 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:42.747    19:08:13 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:42.747    19:08:13 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:42.747    19:08:13 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:42.747    19:08:13 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:09:42.747    19:08:13 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:09:42.747    19:08:13 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:09:42.747    19:08:13 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:09:42.747    19:08:13 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:09:42.747    19:08:13 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:09:42.747    19:08:13 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:09:42.747    19:08:13 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:42.747    19:08:13 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:09:42.747    19:08:13 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:09:42.747    19:08:13 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:42.747    19:08:13 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:42.747     19:08:13 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:09:42.747     19:08:13 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:09:42.747     19:08:13 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:42.747     19:08:13 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:09:42.747    19:08:13 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:09:42.747     19:08:13 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:09:42.747     19:08:13 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:09:42.747     19:08:13 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:42.747     19:08:13 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:09:42.747    19:08:13 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:09:42.747    19:08:13 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:42.747    19:08:13 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:42.747    19:08:13 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:09:42.747    19:08:13 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:42.747    19:08:13 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:42.747  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:42.747  		--rc genhtml_branch_coverage=1
00:09:42.747  		--rc genhtml_function_coverage=1
00:09:42.747  		--rc genhtml_legend=1
00:09:42.747  		--rc geninfo_all_blocks=1
00:09:42.747  		--rc geninfo_unexecuted_blocks=1
00:09:42.747  		
00:09:42.747  		'
00:09:42.747    19:08:13 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:42.747  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:42.747  		--rc genhtml_branch_coverage=1
00:09:42.747  		--rc genhtml_function_coverage=1
00:09:42.747  		--rc genhtml_legend=1
00:09:42.747  		--rc geninfo_all_blocks=1
00:09:42.747  		--rc geninfo_unexecuted_blocks=1
00:09:42.747  		
00:09:42.747  		'
00:09:42.747    19:08:13 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:42.747  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:42.747  		--rc genhtml_branch_coverage=1
00:09:42.747  		--rc genhtml_function_coverage=1
00:09:42.747  		--rc genhtml_legend=1
00:09:42.747  		--rc geninfo_all_blocks=1
00:09:42.747  		--rc geninfo_unexecuted_blocks=1
00:09:42.747  		
00:09:42.747  		'
00:09:42.747    19:08:13 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:42.747  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:42.747  		--rc genhtml_branch_coverage=1
00:09:42.747  		--rc genhtml_function_coverage=1
00:09:42.747  		--rc genhtml_legend=1
00:09:42.747  		--rc geninfo_all_blocks=1
00:09:42.747  		--rc geninfo_unexecuted_blocks=1
00:09:42.747  		
00:09:42.747  		'
00:09:42.747   19:08:13 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:09:42.747   19:08:13 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=516910
00:09:42.747   19:08:13 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:09:42.747   19:08:13 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:09:42.747   19:08:13 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 516910
00:09:42.747   19:08:13 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 516910 ']'
00:09:42.747   19:08:13 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:42.747   19:08:13 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:42.747   19:08:13 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:42.747  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:42.747   19:08:13 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:42.747   19:08:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:09:42.747  [2024-12-06 19:08:13.689116] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:09:42.747  [2024-12-06 19:08:13.689281] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid516910 ]
00:09:43.006  [2024-12-06 19:08:13.818750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:43.006  [2024-12-06 19:08:13.939586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:43.006  [2024-12-06 19:08:13.939647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:43.006  [2024-12-06 19:08:13.939687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:43.006  [2024-12-06 19:08:13.939719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:09:43.941   19:08:14 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:43.941   19:08:14 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:09:43.941   19:08:14 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:09:43.941   19:08:14 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:43.941   19:08:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:09:43.941  [2024-12-06 19:08:14.622739] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:09:43.941  [2024-12-06 19:08:14.622780] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:09:43.941  [2024-12-06 19:08:14.622828] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:09:43.941  [2024-12-06 19:08:14.622852] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:09:43.941  [2024-12-06 19:08:14.622873] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:09:43.941   19:08:14 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:43.941   19:08:14 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:09:43.941   19:08:14 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:43.941   19:08:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:09:44.200  [2024-12-06 19:08:14.940051] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:09:44.200   19:08:14 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:44.200   19:08:14 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:09:44.200   19:08:14 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:44.200   19:08:14 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:44.200   19:08:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:09:44.200  ************************************
00:09:44.200  START TEST scheduler_create_thread
00:09:44.200  ************************************
00:09:44.200   19:08:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:09:44.200   19:08:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:09:44.200   19:08:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:44.200   19:08:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:44.200  2
00:09:44.200   19:08:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:44.200   19:08:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:09:44.200   19:08:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:44.200   19:08:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:44.200  3
00:09:44.200   19:08:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:44.200   19:08:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:09:44.200   19:08:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:44.200   19:08:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:44.200  4
00:09:44.200   19:08:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:44.200   19:08:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:09:44.200   19:08:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:44.201   19:08:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:44.201  5
00:09:44.201   19:08:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:44.201   19:08:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:09:44.201   19:08:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:44.201   19:08:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:44.201  6
00:09:44.201   19:08:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:44.201   19:08:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:09:44.201   19:08:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:44.201   19:08:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:44.201  7
00:09:44.201   19:08:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:44.201   19:08:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:09:44.201   19:08:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:44.201   19:08:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:44.201  8
00:09:44.201   19:08:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:44.201   19:08:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:09:44.201   19:08:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:44.201   19:08:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:44.201  9
00:09:44.201   19:08:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:44.201   19:08:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:09:44.201   19:08:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:44.201   19:08:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:44.201  10
00:09:44.201   19:08:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:44.201    19:08:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:09:44.201    19:08:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:44.201    19:08:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:44.201    19:08:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:44.201   19:08:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:09:44.201   19:08:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:09:44.201   19:08:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:44.201   19:08:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:44.201   19:08:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:44.201    19:08:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:09:44.201    19:08:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:44.201    19:08:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:44.201    19:08:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:44.201   19:08:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:09:44.201   19:08:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:09:44.201   19:08:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:44.201   19:08:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:44.765   19:08:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:44.765  
00:09:44.765  real	0m0.598s
00:09:44.765  user	0m0.011s
00:09:44.765  sys	0m0.005s
00:09:44.765   19:08:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:44.765   19:08:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:44.765  ************************************
00:09:44.765  END TEST scheduler_create_thread
00:09:44.765  ************************************
00:09:44.765   19:08:15 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:09:44.765   19:08:15 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 516910
00:09:44.765   19:08:15 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 516910 ']'
00:09:44.765   19:08:15 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 516910
00:09:44.765    19:08:15 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:09:44.765   19:08:15 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:44.765    19:08:15 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 516910
00:09:44.765   19:08:15 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:09:44.765   19:08:15 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:09:44.765   19:08:15 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 516910'
00:09:44.765  killing process with pid 516910
00:09:44.765   19:08:15 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 516910
00:09:44.765   19:08:15 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 516910
00:09:45.330  [2024-12-06 19:08:16.048300] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:09:46.348  
00:09:46.348  real	0m3.628s
00:09:46.348  user	0m7.375s
00:09:46.348  sys	0m0.500s
00:09:46.348   19:08:17 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:46.348   19:08:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:09:46.348  ************************************
00:09:46.348  END TEST event_scheduler
00:09:46.348  ************************************
00:09:46.348   19:08:17 event -- event/event.sh@51 -- # modprobe -n nbd
00:09:46.348   19:08:17 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:09:46.348   19:08:17 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:46.348   19:08:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:46.348   19:08:17 event -- common/autotest_common.sh@10 -- # set +x
00:09:46.348  ************************************
00:09:46.348  START TEST app_repeat
00:09:46.348  ************************************
00:09:46.348   19:08:17 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:09:46.348   19:08:17 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:46.348   19:08:17 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:46.348   19:08:17 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:09:46.348   19:08:17 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:46.348   19:08:17 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:09:46.348   19:08:17 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:09:46.348   19:08:17 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:09:46.348   19:08:17 event.app_repeat -- event/event.sh@19 -- # repeat_pid=517372
00:09:46.348   19:08:17 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:09:46.348   19:08:17 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:09:46.348   19:08:17 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 517372'
00:09:46.348  Process app_repeat pid: 517372
00:09:46.348   19:08:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:09:46.348   19:08:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:09:46.348  spdk_app_start Round 0
00:09:46.348   19:08:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 517372 /var/tmp/spdk-nbd.sock
00:09:46.348   19:08:17 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 517372 ']'
00:09:46.348   19:08:17 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:09:46.348   19:08:17 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:46.348   19:08:17 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:09:46.348  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:09:46.348   19:08:17 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:46.348   19:08:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:09:46.348  [2024-12-06 19:08:17.195922] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:09:46.348  [2024-12-06 19:08:17.196071] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid517372 ]
00:09:46.607  [2024-12-06 19:08:17.327657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:46.607  [2024-12-06 19:08:17.449455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:46.607  [2024-12-06 19:08:17.449459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:47.541   19:08:18 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:47.541   19:08:18 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:09:47.541   19:08:18 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:09:47.800  Malloc0
00:09:47.800   19:08:18 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:09:48.059  Malloc1
00:09:48.059   19:08:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:09:48.059   19:08:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:48.059   19:08:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:48.059   19:08:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:09:48.059   19:08:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:48.059   19:08:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:09:48.059   19:08:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:09:48.059   19:08:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:48.059   19:08:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:48.059   19:08:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:09:48.059   19:08:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:48.059   19:08:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:09:48.059   19:08:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:09:48.059   19:08:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:09:48.059   19:08:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:48.059   19:08:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:09:48.318  /dev/nbd0
00:09:48.318    19:08:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:09:48.318   19:08:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:09:48.318   19:08:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:09:48.318   19:08:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:09:48.318   19:08:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:48.318   19:08:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:48.318   19:08:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:09:48.318   19:08:19 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:09:48.318   19:08:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:48.318   19:08:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:48.318   19:08:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:09:48.318  1+0 records in
00:09:48.318  1+0 records out
00:09:48.318  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000233487 s, 17.5 MB/s
00:09:48.318    19:08:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:09:48.318   19:08:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:09:48.318   19:08:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:09:48.318   19:08:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:48.318   19:08:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:09:48.318   19:08:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:48.318   19:08:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:48.318   19:08:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:09:48.576  /dev/nbd1
00:09:48.576    19:08:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:09:48.576   19:08:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:09:48.576   19:08:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:09:48.576   19:08:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:09:48.576   19:08:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:48.576   19:08:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:48.576   19:08:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:09:48.576   19:08:19 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:09:48.576   19:08:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:48.576   19:08:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:48.576   19:08:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:09:48.576  1+0 records in
00:09:48.576  1+0 records out
00:09:48.576  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220893 s, 18.5 MB/s
00:09:48.576    19:08:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:09:48.576   19:08:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:09:48.577   19:08:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:09:48.577   19:08:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:48.577   19:08:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:09:48.577   19:08:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:48.577   19:08:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:48.577    19:08:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:48.577    19:08:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:48.577     19:08:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:49.144    19:08:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:09:49.144    {
00:09:49.144      "nbd_device": "/dev/nbd0",
00:09:49.144      "bdev_name": "Malloc0"
00:09:49.144    },
00:09:49.144    {
00:09:49.144      "nbd_device": "/dev/nbd1",
00:09:49.144      "bdev_name": "Malloc1"
00:09:49.144    }
00:09:49.144  ]'
00:09:49.144     19:08:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:09:49.144    {
00:09:49.144      "nbd_device": "/dev/nbd0",
00:09:49.144      "bdev_name": "Malloc0"
00:09:49.144    },
00:09:49.144    {
00:09:49.144      "nbd_device": "/dev/nbd1",
00:09:49.144      "bdev_name": "Malloc1"
00:09:49.144    }
00:09:49.144  ]'
00:09:49.144     19:08:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:49.144    19:08:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:09:49.144  /dev/nbd1'
00:09:49.144     19:08:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:09:49.144  /dev/nbd1'
00:09:49.144     19:08:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:49.144    19:08:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:09:49.144    19:08:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:09:49.144   19:08:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:09:49.144   19:08:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:09:49.144   19:08:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:09:49.144   19:08:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:49.144   19:08:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:49.144   19:08:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:09:49.144   19:08:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
00:09:49.144   19:08:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:09:49.144   19:08:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:09:49.144  256+0 records in
00:09:49.144  256+0 records out
00:09:49.144  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00383629 s, 273 MB/s
00:09:49.144   19:08:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:49.144   19:08:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:09:49.144  256+0 records in
00:09:49.144  256+0 records out
00:09:49.144  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252984 s, 41.4 MB/s
00:09:49.144   19:08:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:49.144   19:08:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:09:49.144  256+0 records in
00:09:49.144  256+0 records out
00:09:49.144  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0289491 s, 36.2 MB/s
00:09:49.144   19:08:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:09:49.144   19:08:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:49.144   19:08:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:49.144   19:08:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:09:49.144   19:08:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
00:09:49.144   19:08:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:09:49.144   19:08:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:09:49.144   19:08:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:49.144   19:08:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:09:49.144   19:08:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:49.144   19:08:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:09:49.144   19:08:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
00:09:49.144   19:08:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:09:49.144   19:08:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:49.144   19:08:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:49.144   19:08:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:09:49.144   19:08:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:09:49.144   19:08:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:49.144   19:08:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:09:49.402    19:08:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:09:49.402   19:08:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:09:49.402   19:08:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:09:49.402   19:08:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:49.402   19:08:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:49.402   19:08:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:09:49.402   19:08:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:09:49.402   19:08:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:09:49.402   19:08:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:49.402   19:08:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:09:49.660    19:08:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:09:49.660   19:08:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:09:49.660   19:08:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:09:49.660   19:08:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:49.660   19:08:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:49.660   19:08:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:09:49.660   19:08:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:09:49.660   19:08:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:09:49.660    19:08:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:49.660    19:08:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:49.660     19:08:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:49.917    19:08:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:09:49.917     19:08:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:09:49.917     19:08:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:49.917    19:08:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:09:49.917     19:08:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:09:49.917     19:08:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:50.174     19:08:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:09:50.174    19:08:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:09:50.174    19:08:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:09:50.174   19:08:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:09:50.174   19:08:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:09:50.174   19:08:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:09:50.174   19:08:20 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:09:50.431   19:08:21 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:09:51.802  [2024-12-06 19:08:22.352514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:51.802  [2024-12-06 19:08:22.464907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:51.802  [2024-12-06 19:08:22.464910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:51.802  [2024-12-06 19:08:22.651755] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:09:51.802  [2024-12-06 19:08:22.651870] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:09:53.715   19:08:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:09:53.715   19:08:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:09:53.715  spdk_app_start Round 1
00:09:53.715   19:08:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 517372 /var/tmp/spdk-nbd.sock
00:09:53.715   19:08:24 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 517372 ']'
00:09:53.715   19:08:24 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:09:53.715   19:08:24 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:53.715   19:08:24 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:09:53.715  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:09:53.715   19:08:24 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:53.716   19:08:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:09:53.716   19:08:24 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:53.716   19:08:24 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:09:53.716   19:08:24 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:09:53.973  Malloc0
00:09:53.973   19:08:24 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:09:54.538  Malloc1
00:09:54.538   19:08:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:09:54.538   19:08:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:54.538   19:08:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:54.538   19:08:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:09:54.538   19:08:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:54.538   19:08:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:09:54.538   19:08:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:09:54.538   19:08:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:54.538   19:08:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:54.538   19:08:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:09:54.538   19:08:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:54.538   19:08:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:09:54.538   19:08:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:09:54.538   19:08:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:09:54.538   19:08:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:54.538   19:08:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:09:54.795  /dev/nbd0
00:09:54.795    19:08:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:09:54.795   19:08:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:09:54.795   19:08:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:09:54.795   19:08:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:09:54.795   19:08:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:54.795   19:08:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:54.795   19:08:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:09:54.795   19:08:25 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:09:54.795   19:08:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:54.795   19:08:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:54.795   19:08:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:09:54.795  1+0 records in
00:09:54.795  1+0 records out
00:09:54.795  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246537 s, 16.6 MB/s
00:09:54.795    19:08:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:09:54.795   19:08:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:09:54.795   19:08:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:09:54.795   19:08:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:54.795   19:08:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:09:54.795   19:08:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:54.795   19:08:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:54.795   19:08:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:09:55.052  /dev/nbd1
00:09:55.052    19:08:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:09:55.052   19:08:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:09:55.052   19:08:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:09:55.052   19:08:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:09:55.052   19:08:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:55.052   19:08:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:55.052   19:08:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:09:55.052   19:08:25 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:09:55.052   19:08:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:55.052   19:08:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:55.052   19:08:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:09:55.052  1+0 records in
00:09:55.052  1+0 records out
00:09:55.052  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216476 s, 18.9 MB/s
00:09:55.052    19:08:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:09:55.052   19:08:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:09:55.052   19:08:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:09:55.052   19:08:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:55.052   19:08:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:09:55.052   19:08:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:55.052   19:08:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:55.052    19:08:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:55.052    19:08:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:55.052     19:08:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:55.310    19:08:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:09:55.310    {
00:09:55.310      "nbd_device": "/dev/nbd0",
00:09:55.310      "bdev_name": "Malloc0"
00:09:55.310    },
00:09:55.310    {
00:09:55.310      "nbd_device": "/dev/nbd1",
00:09:55.310      "bdev_name": "Malloc1"
00:09:55.310    }
00:09:55.310  ]'
00:09:55.310     19:08:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:09:55.310    {
00:09:55.310      "nbd_device": "/dev/nbd0",
00:09:55.310      "bdev_name": "Malloc0"
00:09:55.310    },
00:09:55.310    {
00:09:55.310      "nbd_device": "/dev/nbd1",
00:09:55.310      "bdev_name": "Malloc1"
00:09:55.310    }
00:09:55.310  ]'
00:09:55.310     19:08:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:55.310    19:08:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:09:55.310  /dev/nbd1'
00:09:55.310     19:08:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:09:55.310  /dev/nbd1'
00:09:55.310     19:08:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:55.310    19:08:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:09:55.310    19:08:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:09:55.310   19:08:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:09:55.310   19:08:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:09:55.310   19:08:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:09:55.310   19:08:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:55.310   19:08:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:55.310   19:08:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:09:55.310   19:08:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
00:09:55.310   19:08:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:09:55.310   19:08:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:09:55.310  256+0 records in
00:09:55.310  256+0 records out
00:09:55.310  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00501302 s, 209 MB/s
00:09:55.310   19:08:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:55.310   19:08:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:09:55.310  256+0 records in
00:09:55.310  256+0 records out
00:09:55.310  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249897 s, 42.0 MB/s
00:09:55.310   19:08:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:55.310   19:08:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:09:55.310  256+0 records in
00:09:55.310  256+0 records out
00:09:55.310  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0289984 s, 36.2 MB/s
00:09:55.310   19:08:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:09:55.310   19:08:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:55.310   19:08:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:55.310   19:08:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:09:55.310   19:08:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
00:09:55.310   19:08:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:09:55.310   19:08:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:09:55.310   19:08:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:55.310   19:08:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:09:55.310   19:08:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:55.310   19:08:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:09:55.310   19:08:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
00:09:55.567   19:08:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:09:55.567   19:08:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:55.567   19:08:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:55.567   19:08:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:09:55.567   19:08:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:09:55.567   19:08:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:55.567   19:08:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:09:55.824    19:08:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:09:55.824   19:08:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:09:55.824   19:08:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:09:55.824   19:08:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:55.825   19:08:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:55.825   19:08:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:09:55.825   19:08:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:09:55.825   19:08:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:09:55.825   19:08:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:55.825   19:08:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:09:56.082    19:08:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:09:56.082   19:08:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:09:56.082   19:08:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:09:56.082   19:08:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:56.082   19:08:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:56.082   19:08:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:09:56.083   19:08:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:09:56.083   19:08:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:09:56.083    19:08:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:56.083    19:08:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:56.083     19:08:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:56.341    19:08:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:09:56.341     19:08:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:09:56.341     19:08:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:56.341    19:08:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:09:56.341     19:08:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:09:56.341     19:08:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:56.341     19:08:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:09:56.341    19:08:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:09:56.341    19:08:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:09:56.341   19:08:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:09:56.341   19:08:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:09:56.341   19:08:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:09:56.341   19:08:27 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:09:56.908   19:08:27 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:09:57.843  [2024-12-06 19:08:28.639055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:57.843  [2024-12-06 19:08:28.750795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:57.843  [2024-12-06 19:08:28.750796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:58.101  [2024-12-06 19:08:28.934709] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:09:58.101  [2024-12-06 19:08:28.934801] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:10:00.002   19:08:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:10:00.002   19:08:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:10:00.002  spdk_app_start Round 2
00:10:00.002   19:08:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 517372 /var/tmp/spdk-nbd.sock
00:10:00.002   19:08:30 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 517372 ']'
00:10:00.002   19:08:30 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:10:00.002   19:08:30 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:00.002   19:08:30 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:10:00.002  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:10:00.002   19:08:30 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:00.002   19:08:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:10:00.002   19:08:30 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:00.002   19:08:30 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:10:00.002   19:08:30 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:10:00.261  Malloc0
00:10:00.261   19:08:31 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:10:00.828  Malloc1
00:10:00.828   19:08:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:10:00.828   19:08:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:00.828   19:08:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:10:00.828   19:08:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:10:00.828   19:08:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:00.828   19:08:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:10:00.828   19:08:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:10:00.828   19:08:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:00.828   19:08:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:10:00.828   19:08:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:10:00.828   19:08:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:00.828   19:08:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:10:00.828   19:08:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:10:00.828   19:08:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:10:00.828   19:08:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:10:00.828   19:08:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:10:01.087  /dev/nbd0
00:10:01.087    19:08:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:10:01.087   19:08:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:10:01.087   19:08:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:10:01.087   19:08:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:10:01.087   19:08:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:01.087   19:08:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:01.087   19:08:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:10:01.087   19:08:31 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:10:01.087   19:08:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:01.087   19:08:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:01.087   19:08:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:10:01.087  1+0 records in
00:10:01.087  1+0 records out
00:10:01.087  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199887 s, 20.5 MB/s
00:10:01.087    19:08:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:10:01.087   19:08:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:10:01.087   19:08:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:10:01.087   19:08:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:01.087   19:08:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:10:01.087   19:08:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:01.087   19:08:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:10:01.087   19:08:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:10:01.346  /dev/nbd1
00:10:01.346    19:08:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:10:01.346   19:08:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:10:01.346   19:08:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:10:01.346   19:08:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:10:01.346   19:08:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:01.346   19:08:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:01.346   19:08:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:10:01.346   19:08:32 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:10:01.346   19:08:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:01.346   19:08:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:01.346   19:08:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:10:01.346  1+0 records in
00:10:01.346  1+0 records out
00:10:01.346  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197411 s, 20.7 MB/s
00:10:01.346    19:08:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:10:01.346   19:08:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:10:01.346   19:08:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:10:01.346   19:08:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:01.346   19:08:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:10:01.346   19:08:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:01.346   19:08:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:10:01.346    19:08:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:10:01.346    19:08:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:01.346     19:08:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:10:01.605    19:08:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:10:01.605    {
00:10:01.605      "nbd_device": "/dev/nbd0",
00:10:01.605      "bdev_name": "Malloc0"
00:10:01.605    },
00:10:01.605    {
00:10:01.605      "nbd_device": "/dev/nbd1",
00:10:01.605      "bdev_name": "Malloc1"
00:10:01.605    }
00:10:01.605  ]'
00:10:01.605     19:08:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:10:01.605    {
00:10:01.605      "nbd_device": "/dev/nbd0",
00:10:01.605      "bdev_name": "Malloc0"
00:10:01.605    },
00:10:01.605    {
00:10:01.605      "nbd_device": "/dev/nbd1",
00:10:01.605      "bdev_name": "Malloc1"
00:10:01.605    }
00:10:01.605  ]'
00:10:01.605     19:08:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:10:01.605    19:08:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:10:01.605  /dev/nbd1'
00:10:01.605     19:08:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:10:01.605  /dev/nbd1'
00:10:01.605     19:08:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:10:01.605    19:08:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:10:01.605    19:08:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:10:01.605   19:08:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:10:01.605   19:08:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:10:01.605   19:08:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:10:01.605   19:08:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:01.605   19:08:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:10:01.605   19:08:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:10:01.605   19:08:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
00:10:01.605   19:08:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:10:01.605   19:08:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:10:01.605  256+0 records in
00:10:01.605  256+0 records out
00:10:01.605  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00512509 s, 205 MB/s
00:10:01.605   19:08:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:01.605   19:08:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:10:01.605  256+0 records in
00:10:01.605  256+0 records out
00:10:01.605  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246952 s, 42.5 MB/s
00:10:01.605   19:08:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:01.605   19:08:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:10:01.605  256+0 records in
00:10:01.605  256+0 records out
00:10:01.605  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0292051 s, 35.9 MB/s
00:10:01.863   19:08:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:10:01.863   19:08:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:01.863   19:08:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:10:01.863   19:08:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:10:01.863   19:08:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
00:10:01.863   19:08:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:10:01.863   19:08:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:10:01.863   19:08:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:01.863   19:08:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:10:01.863   19:08:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:01.863   19:08:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:10:01.863   19:08:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
00:10:01.863   19:08:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:10:01.863   19:08:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:01.863   19:08:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:01.863   19:08:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:10:01.863   19:08:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:10:01.863   19:08:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:01.863   19:08:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:10:02.122    19:08:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:10:02.122   19:08:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:10:02.122   19:08:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:10:02.122   19:08:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:02.122   19:08:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:02.122   19:08:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:10:02.122   19:08:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:10:02.122   19:08:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:10:02.122   19:08:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:02.122   19:08:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:10:02.380    19:08:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:10:02.380   19:08:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:10:02.380   19:08:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:10:02.380   19:08:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:02.380   19:08:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:02.380   19:08:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:10:02.380   19:08:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:10:02.380   19:08:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:10:02.380    19:08:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:10:02.380    19:08:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:02.380     19:08:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:10:02.638    19:08:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:10:02.638     19:08:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:10:02.638     19:08:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:10:02.638    19:08:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:10:02.638     19:08:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:10:02.638     19:08:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:10:02.638     19:08:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:10:02.638    19:08:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:10:02.638    19:08:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:10:02.638   19:08:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:10:02.638   19:08:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:10:02.638   19:08:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:10:02.638   19:08:33 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:10:03.205   19:08:33 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:10:04.138  [2024-12-06 19:08:34.966953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:10:04.139  [2024-12-06 19:08:35.079710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:04.139  [2024-12-06 19:08:35.079714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:04.395  [2024-12-06 19:08:35.264197] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:10:04.395  [2024-12-06 19:08:35.264284] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:10:06.291   19:08:36 event.app_repeat -- event/event.sh@38 -- # waitforlisten 517372 /var/tmp/spdk-nbd.sock
00:10:06.291   19:08:36 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 517372 ']'
00:10:06.291   19:08:36 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:10:06.291   19:08:36 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:06.291   19:08:36 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:10:06.291  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:10:06.291   19:08:36 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:06.291   19:08:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:10:06.291   19:08:37 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:06.291   19:08:37 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:10:06.291   19:08:37 event.app_repeat -- event/event.sh@39 -- # killprocess 517372
00:10:06.291   19:08:37 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 517372 ']'
00:10:06.291   19:08:37 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 517372
00:10:06.291    19:08:37 event.app_repeat -- common/autotest_common.sh@959 -- # uname
00:10:06.291   19:08:37 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:06.291    19:08:37 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 517372
00:10:06.291   19:08:37 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:06.291   19:08:37 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:06.291   19:08:37 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 517372'
00:10:06.291  killing process with pid 517372
00:10:06.291   19:08:37 event.app_repeat -- common/autotest_common.sh@973 -- # kill 517372
00:10:06.291   19:08:37 event.app_repeat -- common/autotest_common.sh@978 -- # wait 517372
00:10:07.224  spdk_app_start is called in Round 0.
00:10:07.224  Shutdown signal received, stop current app iteration
00:10:07.224  Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 reinitialization...
00:10:07.224  spdk_app_start is called in Round 1.
00:10:07.224  Shutdown signal received, stop current app iteration
00:10:07.224  Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 reinitialization...
00:10:07.224  spdk_app_start is called in Round 2.
00:10:07.224  Shutdown signal received, stop current app iteration
00:10:07.224  Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 reinitialization...
00:10:07.224  spdk_app_start is called in Round 3.
00:10:07.224  Shutdown signal received, stop current app iteration
00:10:07.224   19:08:38 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:10:07.224   19:08:38 event.app_repeat -- event/event.sh@42 -- # return 0
00:10:07.224  
00:10:07.224  real	0m21.011s
00:10:07.224  user	0m45.159s
00:10:07.224  sys	0m3.339s
00:10:07.224   19:08:38 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:07.224   19:08:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:10:07.224  ************************************
00:10:07.224  END TEST app_repeat
00:10:07.224  ************************************
00:10:07.483   19:08:38 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:10:07.483   19:08:38 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/cpu_locks.sh
00:10:07.483   19:08:38 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:07.483   19:08:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:07.483   19:08:38 event -- common/autotest_common.sh@10 -- # set +x
00:10:07.483  ************************************
00:10:07.483  START TEST cpu_locks
00:10:07.483  ************************************
00:10:07.483   19:08:38 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/cpu_locks.sh
00:10:07.483  * Looking for test storage...
00:10:07.483  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event
00:10:07.483    19:08:38 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:10:07.483     19:08:38 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version
00:10:07.483     19:08:38 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:10:07.483    19:08:38 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:10:07.483    19:08:38 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:07.483    19:08:38 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:07.483    19:08:38 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:07.483    19:08:38 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:10:07.483    19:08:38 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:10:07.483    19:08:38 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:10:07.483    19:08:38 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:10:07.483    19:08:38 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:10:07.483    19:08:38 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:10:07.483    19:08:38 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:10:07.483    19:08:38 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:07.483    19:08:38 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:10:07.483    19:08:38 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:10:07.483    19:08:38 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:07.483    19:08:38 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:07.483     19:08:38 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:10:07.483     19:08:38 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:10:07.483     19:08:38 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:07.483     19:08:38 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:10:07.483    19:08:38 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:10:07.483     19:08:38 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:10:07.483     19:08:38 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:10:07.483     19:08:38 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:07.483     19:08:38 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:10:07.483    19:08:38 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:10:07.483    19:08:38 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:07.483    19:08:38 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:07.483    19:08:38 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:10:07.483    19:08:38 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:07.483    19:08:38 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:10:07.483  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:07.483  		--rc genhtml_branch_coverage=1
00:10:07.483  		--rc genhtml_function_coverage=1
00:10:07.483  		--rc genhtml_legend=1
00:10:07.483  		--rc geninfo_all_blocks=1
00:10:07.483  		--rc geninfo_unexecuted_blocks=1
00:10:07.483  		
00:10:07.483  		'
00:10:07.483    19:08:38 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:10:07.483  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:07.483  		--rc genhtml_branch_coverage=1
00:10:07.483  		--rc genhtml_function_coverage=1
00:10:07.483  		--rc genhtml_legend=1
00:10:07.483  		--rc geninfo_all_blocks=1
00:10:07.483  		--rc geninfo_unexecuted_blocks=1
00:10:07.483  		
00:10:07.483  		'
00:10:07.483    19:08:38 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:10:07.483  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:07.483  		--rc genhtml_branch_coverage=1
00:10:07.483  		--rc genhtml_function_coverage=1
00:10:07.483  		--rc genhtml_legend=1
00:10:07.483  		--rc geninfo_all_blocks=1
00:10:07.483  		--rc geninfo_unexecuted_blocks=1
00:10:07.483  		
00:10:07.483  		'
00:10:07.483    19:08:38 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:10:07.483  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:07.483  		--rc genhtml_branch_coverage=1
00:10:07.483  		--rc genhtml_function_coverage=1
00:10:07.483  		--rc genhtml_legend=1
00:10:07.483  		--rc geninfo_all_blocks=1
00:10:07.483  		--rc geninfo_unexecuted_blocks=1
00:10:07.483  		
00:10:07.483  		'
00:10:07.483   19:08:38 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:10:07.483   19:08:38 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:10:07.484   19:08:38 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:10:07.484   19:08:38 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:10:07.484   19:08:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:07.484   19:08:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:07.484   19:08:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:10:07.484  ************************************
00:10:07.484  START TEST default_locks
00:10:07.484  ************************************
00:10:07.484   19:08:38 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks
00:10:07.484   19:08:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=520133
00:10:07.484   19:08:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:10:07.484   19:08:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 520133
00:10:07.484   19:08:38 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 520133 ']'
00:10:07.484   19:08:38 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:07.484   19:08:38 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:07.484   19:08:38 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:07.484  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:07.484   19:08:38 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:07.484   19:08:38 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:10:07.741  [2024-12-06 19:08:38.476649] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:10:07.742  [2024-12-06 19:08:38.476782] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid520133 ]
00:10:07.742  [2024-12-06 19:08:38.610830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:08.000  [2024-12-06 19:08:38.730109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:08.933   19:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:08.933   19:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0
00:10:08.933   19:08:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 520133
00:10:08.933   19:08:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 520133
00:10:08.933   19:08:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:10:08.933  lslocks: write error
00:10:08.933   19:08:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 520133
00:10:08.933   19:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 520133 ']'
00:10:08.933   19:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 520133
00:10:08.933    19:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname
00:10:08.933   19:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:08.933    19:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 520133
00:10:08.933   19:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:08.933   19:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:08.933   19:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 520133'
00:10:08.933  killing process with pid 520133
00:10:08.933   19:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 520133
00:10:08.933   19:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 520133
00:10:11.458   19:08:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 520133
00:10:11.458   19:08:41 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0
00:10:11.458   19:08:41 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 520133
00:10:11.458   19:08:41 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:10:11.458   19:08:41 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:11.458    19:08:41 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:10:11.458   19:08:41 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:11.458   19:08:41 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 520133
00:10:11.458   19:08:41 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 520133 ']'
00:10:11.458   19:08:41 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:11.458   19:08:41 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:11.458   19:08:41 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:11.458  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:11.458   19:08:41 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:11.458   19:08:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:10:11.458  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (520133) - No such process
00:10:11.458  ERROR: process (pid: 520133) is no longer running
00:10:11.458   19:08:41 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:11.458   19:08:41 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
00:10:11.458   19:08:41 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
00:10:11.458   19:08:41 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:10:11.458   19:08:41 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:10:11.458   19:08:41 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:10:11.458   19:08:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:10:11.458   19:08:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:10:11.458   19:08:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:10:11.458   19:08:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:10:11.458  
00:10:11.458  real	0m3.446s
00:10:11.458  user	0m3.444s
00:10:11.458  sys	0m0.699s
00:10:11.458   19:08:41 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:11.458   19:08:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:10:11.458  ************************************
00:10:11.458  END TEST default_locks
00:10:11.458  ************************************
00:10:11.458   19:08:41 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:10:11.458   19:08:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:11.458   19:08:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:11.458   19:08:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:10:11.458  ************************************
00:10:11.458  START TEST default_locks_via_rpc
00:10:11.458  ************************************
00:10:11.458   19:08:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
00:10:11.458   19:08:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=520567
00:10:11.458   19:08:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:10:11.458   19:08:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 520567
00:10:11.458   19:08:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 520567 ']'
00:10:11.458   19:08:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:11.458   19:08:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:11.458   19:08:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:11.458  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:11.458   19:08:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:11.458   19:08:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:11.458  [2024-12-06 19:08:41.987537] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:10:11.458  [2024-12-06 19:08:41.987688] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid520567 ]
00:10:11.458  [2024-12-06 19:08:42.123065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:11.458  [2024-12-06 19:08:42.240653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:12.393   19:08:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:12.393   19:08:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:10:12.393   19:08:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:10:12.393   19:08:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:12.393   19:08:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:12.393   19:08:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:12.393   19:08:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:10:12.393   19:08:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:10:12.393   19:08:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:10:12.393   19:08:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:10:12.393   19:08:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:10:12.393   19:08:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:12.393   19:08:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:12.393   19:08:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:12.393   19:08:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 520567
00:10:12.393   19:08:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 520567
00:10:12.393   19:08:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:10:12.393   19:08:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 520567
00:10:12.393   19:08:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 520567 ']'
00:10:12.393   19:08:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 520567
00:10:12.393    19:08:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:10:12.393   19:08:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:12.393    19:08:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 520567
00:10:12.393   19:08:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:12.393   19:08:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:12.393   19:08:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 520567'
00:10:12.393  killing process with pid 520567
00:10:12.393   19:08:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 520567
00:10:12.393   19:08:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 520567
00:10:14.922  
00:10:14.922  real	0m3.461s
00:10:14.922  user	0m3.517s
00:10:14.922  sys	0m0.681s
00:10:14.922   19:08:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:14.922   19:08:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:14.922  ************************************
00:10:14.922  END TEST default_locks_via_rpc
00:10:14.922  ************************************
00:10:14.922   19:08:45 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:10:14.922   19:08:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:14.922   19:08:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:14.922   19:08:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:10:14.922  ************************************
00:10:14.922  START TEST non_locking_app_on_locked_coremask
00:10:14.922  ************************************
00:10:14.922   19:08:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:10:14.922   19:08:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=520994
00:10:14.922   19:08:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:10:14.922   19:08:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 520994 /var/tmp/spdk.sock
00:10:14.922   19:08:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 520994 ']'
00:10:14.922   19:08:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:14.922   19:08:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:14.922   19:08:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:14.922  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:14.922   19:08:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:14.922   19:08:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:14.922  [2024-12-06 19:08:45.499102] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:10:14.922  [2024-12-06 19:08:45.499278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid520994 ]
00:10:14.922  [2024-12-06 19:08:45.632735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:14.922  [2024-12-06 19:08:45.750937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:15.858   19:08:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:15.859   19:08:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:10:15.859   19:08:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=521136
00:10:15.859   19:08:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 521136 /var/tmp/spdk2.sock
00:10:15.859   19:08:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 521136 ']'
00:10:15.859   19:08:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:10:15.859   19:08:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:10:15.859   19:08:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:15.859   19:08:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:10:15.859  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:10:15.859   19:08:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:15.859   19:08:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:15.859  [2024-12-06 19:08:46.675921] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:10:15.859  [2024-12-06 19:08:46.676058] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid521136 ]
00:10:16.117  [2024-12-06 19:08:46.862053] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:10:16.117  [2024-12-06 19:08:46.862127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:16.376  [2024-12-06 19:08:47.106096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:18.908   19:08:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:18.908   19:08:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:10:18.908   19:08:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 520994
00:10:18.908   19:08:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 520994
00:10:18.908   19:08:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:10:18.908  lslocks: write error
00:10:18.908   19:08:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 520994
00:10:18.908   19:08:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 520994 ']'
00:10:18.908   19:08:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 520994
00:10:18.908    19:08:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:10:18.908   19:08:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:18.908    19:08:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 520994
00:10:18.908   19:08:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:18.908   19:08:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:18.908   19:08:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 520994'
00:10:18.908  killing process with pid 520994
00:10:18.908   19:08:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 520994
00:10:18.908   19:08:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 520994
00:10:23.082   19:08:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 521136
00:10:23.082   19:08:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 521136 ']'
00:10:23.082   19:08:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 521136
00:10:23.082    19:08:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:10:23.082   19:08:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:23.082    19:08:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 521136
00:10:23.082   19:08:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:23.082   19:08:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:23.082   19:08:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 521136'
00:10:23.082  killing process with pid 521136
00:10:23.082   19:08:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 521136
00:10:23.082   19:08:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 521136
00:10:24.987  
00:10:24.987  real	0m10.545s
00:10:24.987  user	0m10.950s
00:10:24.987  sys	0m1.427s
00:10:24.987   19:08:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:24.987   19:08:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:24.987  ************************************
00:10:24.987  END TEST non_locking_app_on_locked_coremask
00:10:24.987  ************************************
00:10:25.246   19:08:55 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:10:25.246   19:08:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:25.246   19:08:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:25.246   19:08:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:10:25.246  ************************************
00:10:25.246  START TEST locking_app_on_unlocked_coremask
00:10:25.246  ************************************
00:10:25.246   19:08:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:10:25.246   19:08:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=522236
00:10:25.246   19:08:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:10:25.246   19:08:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 522236 /var/tmp/spdk.sock
00:10:25.246   19:08:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 522236 ']'
00:10:25.246   19:08:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:25.246   19:08:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:25.246   19:08:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:25.246  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:25.246   19:08:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:25.246   19:08:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:25.246  [2024-12-06 19:08:56.094723] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:10:25.246  [2024-12-06 19:08:56.094866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid522236 ]
00:10:25.505  [2024-12-06 19:08:56.228240] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:10:25.505  [2024-12-06 19:08:56.228293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:25.505  [2024-12-06 19:08:56.353568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:26.440   19:08:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:26.441   19:08:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:10:26.441   19:08:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=522375
00:10:26.441   19:08:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:10:26.441   19:08:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 522375 /var/tmp/spdk2.sock
00:10:26.441   19:08:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 522375 ']'
00:10:26.441   19:08:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:10:26.441   19:08:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:26.441   19:08:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:10:26.441  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:10:26.441   19:08:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:26.441   19:08:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:26.441  [2024-12-06 19:08:57.270113] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:10:26.441  [2024-12-06 19:08:57.270259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid522375 ]
00:10:26.699  [2024-12-06 19:08:57.460003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:26.957  [2024-12-06 19:08:57.698963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:29.488   19:08:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:29.488   19:08:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:10:29.488   19:08:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 522375
00:10:29.488   19:08:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 522375
00:10:29.488   19:08:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:10:29.746  lslocks: write error
00:10:29.746   19:09:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 522236
00:10:29.746   19:09:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 522236 ']'
00:10:29.746   19:09:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 522236
00:10:29.746    19:09:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:10:29.746   19:09:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:29.746    19:09:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 522236
00:10:29.746   19:09:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:29.746   19:09:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:29.746   19:09:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 522236'
00:10:29.746  killing process with pid 522236
00:10:29.746   19:09:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 522236
00:10:29.746   19:09:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 522236
00:10:33.928   19:09:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 522375
00:10:33.928   19:09:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 522375 ']'
00:10:33.928   19:09:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 522375
00:10:33.928    19:09:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:10:33.928   19:09:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:33.928    19:09:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 522375
00:10:33.928   19:09:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:33.928   19:09:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:33.928   19:09:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 522375'
00:10:33.928  killing process with pid 522375
00:10:33.928   19:09:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 522375
00:10:33.928   19:09:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 522375
00:10:35.826  
00:10:35.826  real	0m10.653s
00:10:35.826  user	0m11.122s
00:10:35.826  sys	0m1.416s
00:10:35.826   19:09:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:35.826   19:09:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:35.826  ************************************
00:10:35.826  END TEST locking_app_on_unlocked_coremask
00:10:35.826  ************************************
00:10:35.826   19:09:06 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:10:35.826   19:09:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:35.826   19:09:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:35.826   19:09:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:10:35.826  ************************************
00:10:35.826  START TEST locking_app_on_locked_coremask
00:10:35.826  ************************************
00:10:35.826   19:09:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:10:35.826   19:09:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=523708
00:10:35.826   19:09:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:10:35.826   19:09:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 523708 /var/tmp/spdk.sock
00:10:35.826   19:09:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 523708 ']'
00:10:35.826   19:09:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:35.826   19:09:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:35.826   19:09:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:35.826  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:35.826   19:09:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:35.826   19:09:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:36.084  [2024-12-06 19:09:06.800075] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:10:36.085  [2024-12-06 19:09:06.800232] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid523708 ]
00:10:36.085  [2024-12-06 19:09:06.931680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:36.342  [2024-12-06 19:09:07.050540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:37.278   19:09:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:37.278   19:09:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:10:37.278   19:09:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=523948
00:10:37.278   19:09:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:10:37.278   19:09:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 523948 /var/tmp/spdk2.sock
00:10:37.278   19:09:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:10:37.278   19:09:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 523948 /var/tmp/spdk2.sock
00:10:37.278   19:09:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:10:37.278   19:09:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:37.278    19:09:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:10:37.278   19:09:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:37.278   19:09:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 523948 /var/tmp/spdk2.sock
00:10:37.278   19:09:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 523948 ']'
00:10:37.278   19:09:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:10:37.278   19:09:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:37.278   19:09:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:10:37.278  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:10:37.278   19:09:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:37.278   19:09:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:37.278  [2024-12-06 19:09:08.012732] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:10:37.278  [2024-12-06 19:09:08.012872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid523948 ]
00:10:37.278  [2024-12-06 19:09:08.210233] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 523708 has claimed it.
00:10:37.278  [2024-12-06 19:09:08.210311] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:10:37.844  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (523948) - No such process
00:10:37.844  ERROR: process (pid: 523948) is no longer running
00:10:37.844   19:09:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:37.844   19:09:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:10:37.844   19:09:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:10:37.844   19:09:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:10:37.844   19:09:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:10:37.844   19:09:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:10:37.844   19:09:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 523708
00:10:37.844   19:09:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 523708
00:10:37.844   19:09:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:10:38.101  lslocks: write error
00:10:38.101   19:09:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 523708
00:10:38.101   19:09:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 523708 ']'
00:10:38.101   19:09:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 523708
00:10:38.101    19:09:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:10:38.101   19:09:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:38.101    19:09:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 523708
00:10:38.101   19:09:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:38.102   19:09:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:38.102   19:09:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 523708'
00:10:38.102  killing process with pid 523708
00:10:38.102   19:09:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 523708
00:10:38.102   19:09:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 523708
00:10:40.660  
00:10:40.660  real	0m4.355s
00:10:40.660  user	0m4.613s
00:10:40.660  sys	0m0.931s
00:10:40.660   19:09:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:40.660   19:09:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:40.660  ************************************
00:10:40.660  END TEST locking_app_on_locked_coremask
00:10:40.660  ************************************
00:10:40.660   19:09:11 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:10:40.660   19:09:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:40.660   19:09:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:40.660   19:09:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:10:40.660  ************************************
00:10:40.660  START TEST locking_overlapped_coremask
00:10:40.660  ************************************
00:10:40.660   19:09:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:10:40.660   19:09:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=524783
00:10:40.660   19:09:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:10:40.661   19:09:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 524783 /var/tmp/spdk.sock
00:10:40.661   19:09:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 524783 ']'
00:10:40.661   19:09:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:40.661   19:09:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:40.661   19:09:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:40.661  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:40.661   19:09:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:40.661   19:09:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:40.661  [2024-12-06 19:09:11.204284] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:10:40.661  [2024-12-06 19:09:11.204422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid524783 ]
00:10:40.661  [2024-12-06 19:09:11.334900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:10:40.661  [2024-12-06 19:09:11.457412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:40.661  [2024-12-06 19:09:11.457455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:40.661  [2024-12-06 19:09:11.457459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:41.632   19:09:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:41.632   19:09:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:10:41.632   19:09:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=524931
00:10:41.632   19:09:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 524931 /var/tmp/spdk2.sock
00:10:41.632   19:09:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:10:41.632   19:09:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:10:41.632   19:09:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 524931 /var/tmp/spdk2.sock
00:10:41.632   19:09:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:10:41.632   19:09:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:41.632    19:09:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:10:41.632   19:09:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:41.632   19:09:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 524931 /var/tmp/spdk2.sock
00:10:41.632   19:09:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 524931 ']'
00:10:41.632   19:09:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:10:41.632   19:09:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:41.632   19:09:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:10:41.632  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:10:41.632   19:09:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:41.632   19:09:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:41.632  [2024-12-06 19:09:12.446318] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:10:41.632  [2024-12-06 19:09:12.446462] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid524931 ]
00:10:41.890  [2024-12-06 19:09:12.638341] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 524783 has claimed it.
00:10:41.890  [2024-12-06 19:09:12.638421] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:10:42.457  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (524931) - No such process
00:10:42.457  ERROR: process (pid: 524931) is no longer running
00:10:42.457   19:09:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:42.457   19:09:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:10:42.457   19:09:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:10:42.457   19:09:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:10:42.457   19:09:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:10:42.457   19:09:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:10:42.457   19:09:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:10:42.457   19:09:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:10:42.457   19:09:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:10:42.457   19:09:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:10:42.457   19:09:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 524783
00:10:42.457   19:09:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 524783 ']'
00:10:42.457   19:09:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 524783
00:10:42.457    19:09:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:10:42.457   19:09:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:42.457    19:09:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 524783
00:10:42.457   19:09:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:42.457   19:09:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:42.457   19:09:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 524783'
00:10:42.457  killing process with pid 524783
00:10:42.457   19:09:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 524783
00:10:42.457   19:09:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 524783
00:10:44.381  
00:10:44.381  real	0m4.221s
00:10:44.381  user	0m11.568s
00:10:44.381  sys	0m0.770s
00:10:44.381   19:09:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:44.381   19:09:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:44.381  ************************************
00:10:44.381  END TEST locking_overlapped_coremask
00:10:44.382  ************************************
00:10:44.646   19:09:15 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:10:44.647   19:09:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:44.647   19:09:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:44.647   19:09:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:10:44.647  ************************************
00:10:44.647  START TEST locking_overlapped_coremask_via_rpc
00:10:44.647  ************************************
00:10:44.647   19:09:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:10:44.647   19:09:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=525251
00:10:44.647   19:09:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:10:44.647   19:09:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 525251 /var/tmp/spdk.sock
00:10:44.647   19:09:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 525251 ']'
00:10:44.647   19:09:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:44.647   19:09:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:44.647   19:09:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:44.647  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:44.647   19:09:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:44.647   19:09:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:44.647  [2024-12-06 19:09:15.478549] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:10:44.647  [2024-12-06 19:09:15.478676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid525251 ]
00:10:44.904  [2024-12-06 19:09:15.613330] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:10:44.904  [2024-12-06 19:09:15.613386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:10:44.904  [2024-12-06 19:09:15.734685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:44.904  [2024-12-06 19:09:15.734728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:44.904  [2024-12-06 19:09:15.734754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:45.839   19:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:45.839   19:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:10:45.839   19:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=525468
00:10:45.839   19:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:10:45.839   19:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 525468 /var/tmp/spdk2.sock
00:10:45.839   19:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 525468 ']'
00:10:45.839   19:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:10:45.839   19:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:45.839   19:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:10:45.839  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:10:45.839   19:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:45.839   19:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:45.839  [2024-12-06 19:09:16.721905] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:10:45.839  [2024-12-06 19:09:16.722058] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid525468 ]
00:10:46.096  [2024-12-06 19:09:16.913893] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:10:46.096  [2024-12-06 19:09:16.913974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:10:46.354  [2024-12-06 19:09:17.176204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:10:46.354  [2024-12-06 19:09:17.176235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:46.354  [2024-12-06 19:09:17.176247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:10:48.876   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:48.876   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:10:48.876   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:10:48.876   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.876   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:48.876   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.876   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:10:48.876   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0
00:10:48.877   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:10:48.877   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:10:48.877   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:48.877    19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:10:48.877   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:48.877   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:10:48.877   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.877   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:48.877  [2024-12-06 19:09:19.412321] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 525251 has claimed it.
00:10:48.877  request:
00:10:48.877  {
00:10:48.877  "method": "framework_enable_cpumask_locks",
00:10:48.877  "req_id": 1
00:10:48.877  }
00:10:48.877  Got JSON-RPC error response
00:10:48.877  response:
00:10:48.877  {
00:10:48.877  "code": -32603,
00:10:48.877  "message": "Failed to claim CPU core: 2"
00:10:48.877  }
00:10:48.877   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:10:48.877   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1
00:10:48.877   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:10:48.877   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:10:48.877   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:10:48.877   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 525251 /var/tmp/spdk.sock
00:10:48.877   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 525251 ']'
00:10:48.877   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:48.877   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:48.877   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:48.877  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:48.877   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:48.877   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:48.877   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:48.877   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:10:48.877   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 525468 /var/tmp/spdk2.sock
00:10:48.877   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 525468 ']'
00:10:48.877   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:10:48.877   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:48.877   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:10:48.877  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:10:48.877   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:48.877   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:49.133   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:49.133   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:10:49.133   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks
00:10:49.133   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:10:49.133   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:10:49.133   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:10:49.133  
00:10:49.133  real	0m4.602s
00:10:49.133  user	0m1.557s
00:10:49.133  sys	0m0.263s
00:10:49.133   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:49.133   19:09:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:49.133  ************************************
00:10:49.133  END TEST locking_overlapped_coremask_via_rpc
00:10:49.133  ************************************
00:10:49.133   19:09:19 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup
00:10:49.133   19:09:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 525251 ]]
00:10:49.133   19:09:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 525251
00:10:49.133   19:09:19 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 525251 ']'
00:10:49.133   19:09:19 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 525251
00:10:49.133    19:09:19 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:10:49.133   19:09:19 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:49.133    19:09:19 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 525251
00:10:49.133   19:09:20 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:49.133   19:09:20 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:49.133   19:09:20 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 525251'
00:10:49.133  killing process with pid 525251
00:10:49.133   19:09:20 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 525251
00:10:49.133   19:09:20 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 525251
00:10:51.657   19:09:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 525468 ]]
00:10:51.657   19:09:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 525468
00:10:51.657   19:09:22 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 525468 ']'
00:10:51.657   19:09:22 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 525468
00:10:51.657    19:09:22 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:10:51.657   19:09:22 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:51.657    19:09:22 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 525468
00:10:51.657   19:09:22 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:10:51.657   19:09:22 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:10:51.657   19:09:22 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 525468'
00:10:51.657  killing process with pid 525468
00:10:51.657   19:09:22 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 525468
00:10:51.657   19:09:22 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 525468
00:10:53.560   19:09:24 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:10:53.560   19:09:24 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup
00:10:53.560   19:09:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 525251 ]]
00:10:53.560   19:09:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 525251
00:10:53.560   19:09:24 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 525251 ']'
00:10:53.560   19:09:24 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 525251
00:10:53.560  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (525251) - No such process
00:10:53.560   19:09:24 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 525251 is not found'
00:10:53.560  Process with pid 525251 is not found
00:10:53.560   19:09:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 525468 ]]
00:10:53.560   19:09:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 525468
00:10:53.560   19:09:24 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 525468 ']'
00:10:53.560   19:09:24 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 525468
00:10:53.560  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (525468) - No such process
00:10:53.560   19:09:24 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 525468 is not found'
00:10:53.560  Process with pid 525468 is not found
00:10:53.560   19:09:24 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:10:53.560  
00:10:53.560  real	0m46.272s
00:10:53.560  user	1m22.101s
00:10:53.560  sys	0m7.496s
00:10:53.560   19:09:24 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:53.560   19:09:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:10:53.560  ************************************
00:10:53.560  END TEST cpu_locks
00:10:53.560  ************************************
00:10:53.560  
00:10:53.560  real	1m15.913s
00:10:53.560  user	2m21.952s
00:10:53.560  sys	0m12.017s
00:10:53.560   19:09:24 event -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:53.560   19:09:24 event -- common/autotest_common.sh@10 -- # set +x
00:10:53.560  ************************************
00:10:53.560  END TEST event
00:10:53.560  ************************************
00:10:53.818   19:09:24  -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/thread/thread.sh
00:10:53.818   19:09:24  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:53.818   19:09:24  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:53.818   19:09:24  -- common/autotest_common.sh@10 -- # set +x
00:10:53.818  ************************************
00:10:53.818  START TEST thread
00:10:53.818  ************************************
00:10:53.818   19:09:24 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/thread/thread.sh
00:10:53.818  * Looking for test storage...
00:10:53.818  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/thread
00:10:53.818    19:09:24 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:10:53.818     19:09:24 thread -- common/autotest_common.sh@1711 -- # lcov --version
00:10:53.818     19:09:24 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:10:53.818    19:09:24 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:10:53.818    19:09:24 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:53.818    19:09:24 thread -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:53.818    19:09:24 thread -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:53.818    19:09:24 thread -- scripts/common.sh@336 -- # IFS=.-:
00:10:53.818    19:09:24 thread -- scripts/common.sh@336 -- # read -ra ver1
00:10:53.818    19:09:24 thread -- scripts/common.sh@337 -- # IFS=.-:
00:10:53.818    19:09:24 thread -- scripts/common.sh@337 -- # read -ra ver2
00:10:53.818    19:09:24 thread -- scripts/common.sh@338 -- # local 'op=<'
00:10:53.818    19:09:24 thread -- scripts/common.sh@340 -- # ver1_l=2
00:10:53.818    19:09:24 thread -- scripts/common.sh@341 -- # ver2_l=1
00:10:53.818    19:09:24 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:53.818    19:09:24 thread -- scripts/common.sh@344 -- # case "$op" in
00:10:53.818    19:09:24 thread -- scripts/common.sh@345 -- # : 1
00:10:53.818    19:09:24 thread -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:53.818    19:09:24 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:53.818     19:09:24 thread -- scripts/common.sh@365 -- # decimal 1
00:10:53.818     19:09:24 thread -- scripts/common.sh@353 -- # local d=1
00:10:53.818     19:09:24 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:53.818     19:09:24 thread -- scripts/common.sh@355 -- # echo 1
00:10:53.818    19:09:24 thread -- scripts/common.sh@365 -- # ver1[v]=1
00:10:53.818     19:09:24 thread -- scripts/common.sh@366 -- # decimal 2
00:10:53.818     19:09:24 thread -- scripts/common.sh@353 -- # local d=2
00:10:53.818     19:09:24 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:53.818     19:09:24 thread -- scripts/common.sh@355 -- # echo 2
00:10:53.818    19:09:24 thread -- scripts/common.sh@366 -- # ver2[v]=2
00:10:53.818    19:09:24 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:53.818    19:09:24 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:53.818    19:09:24 thread -- scripts/common.sh@368 -- # return 0
00:10:53.818    19:09:24 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:53.818    19:09:24 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:10:53.818  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:53.818  		--rc genhtml_branch_coverage=1
00:10:53.818  		--rc genhtml_function_coverage=1
00:10:53.818  		--rc genhtml_legend=1
00:10:53.818  		--rc geninfo_all_blocks=1
00:10:53.818  		--rc geninfo_unexecuted_blocks=1
00:10:53.818  		
00:10:53.818  		'
00:10:53.818    19:09:24 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:10:53.818  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:53.818  		--rc genhtml_branch_coverage=1
00:10:53.818  		--rc genhtml_function_coverage=1
00:10:53.818  		--rc genhtml_legend=1
00:10:53.818  		--rc geninfo_all_blocks=1
00:10:53.818  		--rc geninfo_unexecuted_blocks=1
00:10:53.819  		
00:10:53.819  		'
00:10:53.819    19:09:24 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:10:53.819  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:53.819  		--rc genhtml_branch_coverage=1
00:10:53.819  		--rc genhtml_function_coverage=1
00:10:53.819  		--rc genhtml_legend=1
00:10:53.819  		--rc geninfo_all_blocks=1
00:10:53.819  		--rc geninfo_unexecuted_blocks=1
00:10:53.819  		
00:10:53.819  		'
00:10:53.819    19:09:24 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:10:53.819  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:53.819  		--rc genhtml_branch_coverage=1
00:10:53.819  		--rc genhtml_function_coverage=1
00:10:53.819  		--rc genhtml_legend=1
00:10:53.819  		--rc geninfo_all_blocks=1
00:10:53.819  		--rc geninfo_unexecuted_blocks=1
00:10:53.819  		
00:10:53.819  		'
00:10:53.819   19:09:24 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:10:53.819   19:09:24 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:10:53.819   19:09:24 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:53.819   19:09:24 thread -- common/autotest_common.sh@10 -- # set +x
00:10:53.819  ************************************
00:10:53.819  START TEST thread_poller_perf
00:10:53.819  ************************************
00:10:53.819   19:09:24 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:10:53.819  [2024-12-06 19:09:24.745534] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:10:53.819  [2024-12-06 19:09:24.745657] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid526537 ]
00:10:54.077  [2024-12-06 19:09:24.873465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:54.077  [2024-12-06 19:09:24.988584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:54.077  Running 1000 pollers for 1 seconds with 1 microseconds period.
00:10:55.447  ======================================
00:10:55.447  busy:2713070079 (cyc)
00:10:55.447  total_run_count: 353000
00:10:55.447  tsc_hz: 2700000000 (cyc)
00:10:55.447  ======================================
00:10:55.447  poller_cost: 7685 (cyc), 2846 (nsec)
00:10:55.447  
00:10:55.447  real	0m1.503s
00:10:55.447  user	0m1.365s
00:10:55.447  sys	0m0.130s
00:10:55.447   19:09:26 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:55.447   19:09:26 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:10:55.447  ************************************
00:10:55.447  END TEST thread_poller_perf
00:10:55.447  ************************************
00:10:55.447   19:09:26 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:10:55.447   19:09:26 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:10:55.447   19:09:26 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:55.447   19:09:26 thread -- common/autotest_common.sh@10 -- # set +x
00:10:55.447  ************************************
00:10:55.447  START TEST thread_poller_perf
00:10:55.447  ************************************
00:10:55.447   19:09:26 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:10:55.447  [2024-12-06 19:09:26.300712] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:10:55.447  [2024-12-06 19:09:26.300834] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid526696 ]
00:10:55.705  [2024-12-06 19:09:26.434132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:55.705  [2024-12-06 19:09:26.550482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:55.705  Running 1000 pollers for 1 seconds with 0 microseconds period.
00:10:57.077  ======================================
00:10:57.077  busy:2704974126 (cyc)
00:10:57.077  total_run_count: 4207000
00:10:57.077  tsc_hz: 2700000000 (cyc)
00:10:57.077  ======================================
00:10:57.077  poller_cost: 642 (cyc), 237 (nsec)
00:10:57.077  
00:10:57.077  real	0m1.506s
00:10:57.077  user	0m1.360s
00:10:57.077  sys	0m0.139s
00:10:57.077   19:09:27 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:57.077   19:09:27 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:10:57.077  ************************************
00:10:57.077  END TEST thread_poller_perf
00:10:57.077  ************************************
00:10:57.077   19:09:27 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:10:57.077  
00:10:57.077  real	0m3.241s
00:10:57.077  user	0m2.856s
00:10:57.077  sys	0m0.383s
00:10:57.077   19:09:27 thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:57.077   19:09:27 thread -- common/autotest_common.sh@10 -- # set +x
00:10:57.077  ************************************
00:10:57.077  END TEST thread
00:10:57.077  ************************************
00:10:57.077   19:09:27  -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
00:10:57.077   19:09:27  -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/app/cmdline.sh
00:10:57.077   19:09:27  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:57.077   19:09:27  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:57.077   19:09:27  -- common/autotest_common.sh@10 -- # set +x
00:10:57.077  ************************************
00:10:57.077  START TEST app_cmdline
00:10:57.077  ************************************
00:10:57.077   19:09:27 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/app/cmdline.sh
00:10:57.077  * Looking for test storage...
00:10:57.077  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/app
00:10:57.077    19:09:27 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:10:57.077     19:09:27 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version
00:10:57.077     19:09:27 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:10:57.077    19:09:27 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:10:57.077    19:09:27 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:57.077    19:09:27 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:57.077    19:09:27 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:57.077    19:09:27 app_cmdline -- scripts/common.sh@336 -- # IFS=.-:
00:10:57.077    19:09:27 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1
00:10:57.077    19:09:27 app_cmdline -- scripts/common.sh@337 -- # IFS=.-:
00:10:57.077    19:09:27 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2
00:10:57.077    19:09:27 app_cmdline -- scripts/common.sh@338 -- # local 'op=<'
00:10:57.077    19:09:27 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2
00:10:57.077    19:09:27 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1
00:10:57.077    19:09:27 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:57.077    19:09:27 app_cmdline -- scripts/common.sh@344 -- # case "$op" in
00:10:57.078    19:09:27 app_cmdline -- scripts/common.sh@345 -- # : 1
00:10:57.078    19:09:27 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:57.078    19:09:27 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:57.078     19:09:27 app_cmdline -- scripts/common.sh@365 -- # decimal 1
00:10:57.078     19:09:27 app_cmdline -- scripts/common.sh@353 -- # local d=1
00:10:57.078     19:09:27 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:57.078     19:09:27 app_cmdline -- scripts/common.sh@355 -- # echo 1
00:10:57.078    19:09:27 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1
00:10:57.078     19:09:27 app_cmdline -- scripts/common.sh@366 -- # decimal 2
00:10:57.078     19:09:27 app_cmdline -- scripts/common.sh@353 -- # local d=2
00:10:57.078     19:09:27 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:57.078     19:09:27 app_cmdline -- scripts/common.sh@355 -- # echo 2
00:10:57.078    19:09:27 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2
00:10:57.078    19:09:27 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:57.078    19:09:27 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:57.078    19:09:27 app_cmdline -- scripts/common.sh@368 -- # return 0
00:10:57.078    19:09:27 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:57.078    19:09:27 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:10:57.078  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:57.078  		--rc genhtml_branch_coverage=1
00:10:57.078  		--rc genhtml_function_coverage=1
00:10:57.078  		--rc genhtml_legend=1
00:10:57.078  		--rc geninfo_all_blocks=1
00:10:57.078  		--rc geninfo_unexecuted_blocks=1
00:10:57.078  		
00:10:57.078  		'
00:10:57.078    19:09:27 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:10:57.078  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:57.078  		--rc genhtml_branch_coverage=1
00:10:57.078  		--rc genhtml_function_coverage=1
00:10:57.078  		--rc genhtml_legend=1
00:10:57.078  		--rc geninfo_all_blocks=1
00:10:57.078  		--rc geninfo_unexecuted_blocks=1
00:10:57.078  		
00:10:57.078  		'
00:10:57.078    19:09:27 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:10:57.078  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:57.078  		--rc genhtml_branch_coverage=1
00:10:57.078  		--rc genhtml_function_coverage=1
00:10:57.078  		--rc genhtml_legend=1
00:10:57.078  		--rc geninfo_all_blocks=1
00:10:57.078  		--rc geninfo_unexecuted_blocks=1
00:10:57.078  		
00:10:57.078  		'
00:10:57.078    19:09:27 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:10:57.078  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:57.078  		--rc genhtml_branch_coverage=1
00:10:57.078  		--rc genhtml_function_coverage=1
00:10:57.078  		--rc genhtml_legend=1
00:10:57.078  		--rc geninfo_all_blocks=1
00:10:57.078  		--rc geninfo_unexecuted_blocks=1
00:10:57.078  		
00:10:57.078  		'
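The `lt 1.15 2` call traced above (scripts/common.sh `cmp_versions`) splits each version string on `.`, `-`, and `:`, then compares components numerically left to right. A standalone sketch of that logic, simplified to the `<` path actually exercised here (this is a reconstruction for illustration, not the real scripts/common.sh):

```shell
# lt VER1 VER2 -> exit 0 iff VER1 is strictly older than VER2,
# mirroring the cmp_versions trace: split on .-: and compare numerically.
lt() {
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        # Missing or non-numeric components count as 0, as in the decimal helper.
        local a=${v1[i]:-0} b=${v2[i]:-0}
        [[ $a =~ ^[0-9]+$ ]] || a=0
        [[ $b =~ ^[0-9]+$ ]] || b=0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal versions are not less-than
}

lt 1.15 2 && echo "lcov 1.15 is older than 2"
```

Numeric comparison matters here: a plain string compare would wrongly rank `1.9` above `1.15`.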
00:10:57.078   19:09:27 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:10:57.078   19:09:27 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=527021
00:10:57.078   19:09:27 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:10:57.078   19:09:27 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 527021
00:10:57.078   19:09:27 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 527021 ']'
00:10:57.078   19:09:27 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:57.078   19:09:27 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:57.078   19:09:27 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:57.078  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:57.078   19:09:27 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:57.078   19:09:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:10:57.336  [2024-12-06 19:09:28.098259] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:10:57.336  [2024-12-06 19:09:28.098415] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid527021 ]
00:10:57.336  [2024-12-06 19:09:28.228132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:57.594  [2024-12-06 19:09:28.344968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:58.529   19:09:29 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:58.529   19:09:29 app_cmdline -- common/autotest_common.sh@868 -- # return 0
00:10:58.529   19:09:29 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py spdk_get_version
00:10:58.529  {
00:10:58.529    "version": "SPDK v25.01-pre git sha1 b6a18b192",
00:10:58.529    "fields": {
00:10:58.529      "major": 25,
00:10:58.529      "minor": 1,
00:10:58.529      "patch": 0,
00:10:58.529      "suffix": "-pre",
00:10:58.529      "commit": "b6a18b192"
00:10:58.529    }
00:10:58.529  }
00:10:58.529   19:09:29 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:10:58.529   19:09:29 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:10:58.529   19:09:29 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:10:58.529   19:09:29 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:10:58.529    19:09:29 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:10:58.529    19:09:29 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:58.529    19:09:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:10:58.529    19:09:29 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:10:58.529    19:09:29 app_cmdline -- app/cmdline.sh@26 -- # sort
00:10:58.529    19:09:29 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:58.529   19:09:29 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:10:58.529   19:09:29 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:10:58.529   19:09:29 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:10:58.529   19:09:29 app_cmdline -- common/autotest_common.sh@652 -- # local es=0
00:10:58.529   19:09:29 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:10:58.529   19:09:29 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:10:58.529   19:09:29 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:58.529    19:09:29 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:10:58.529   19:09:29 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:58.529    19:09:29 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:10:58.787   19:09:29 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:58.787   19:09:29 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:10:58.787   19:09:29 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py ]]
00:10:58.787   19:09:29 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:10:58.787  request:
00:10:58.787  {
00:10:58.787    "method": "env_dpdk_get_mem_stats",
00:10:58.787    "req_id": 1
00:10:58.787  }
00:10:58.787  Got JSON-RPC error response
00:10:58.787  response:
00:10:58.787  {
00:10:58.787    "code": -32601,
00:10:58.787    "message": "Method not found"
00:10:58.787  }
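The -32601 "Method not found" response above is expected: spdk_tgt was started with `--rpcs-allowed spdk_get_version,rpc_get_methods`, so `env_dpdk_get_mem_stats` is rejected and the NOT wrapper records the failure (`es=1`). A toy sketch of that allowlist dispatch (hypothetical `rpc_dispatch` helper; the real filtering happens inside spdk_tgt, not in shell):

```shell
# rpc_dispatch METHOD -> result JSON if METHOD is allowlisted,
# else a JSON-RPC error object with code -32601 (Method not found).
rpc_dispatch() {
    local allowed="spdk_get_version rpc_get_methods" method=$1
    if [[ " $allowed " == *" $method "* ]]; then
        echo '{"result": "ok"}'
    else
        printf '{"code": %d, "message": "Method not found"}\n' -32601
    fi
}

rpc_dispatch env_dpdk_get_mem_stats
```

-32601 is the standard JSON-RPC 2.0 code for an unknown (here: disallowed) method, which is why the test asserts on exactly this failure.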
00:10:59.044   19:09:29 app_cmdline -- common/autotest_common.sh@655 -- # es=1
00:10:59.044   19:09:29 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:10:59.044   19:09:29 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:10:59.044   19:09:29 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:10:59.044   19:09:29 app_cmdline -- app/cmdline.sh@1 -- # killprocess 527021
00:10:59.044   19:09:29 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 527021 ']'
00:10:59.044   19:09:29 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 527021
00:10:59.044    19:09:29 app_cmdline -- common/autotest_common.sh@959 -- # uname
00:10:59.044   19:09:29 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:59.044    19:09:29 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 527021
00:10:59.044   19:09:29 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:59.044   19:09:29 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:59.044   19:09:29 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 527021'
00:10:59.044  killing process with pid 527021
00:10:59.044   19:09:29 app_cmdline -- common/autotest_common.sh@973 -- # kill 527021
00:10:59.044   19:09:29 app_cmdline -- common/autotest_common.sh@978 -- # wait 527021
00:11:00.944  
00:11:00.944  real	0m3.969s
00:11:00.944  user	0m4.363s
00:11:00.944  sys	0m0.691s
00:11:00.944   19:09:31 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:00.944   19:09:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:11:00.944  ************************************
00:11:00.944  END TEST app_cmdline
00:11:00.944  ************************************
00:11:00.944   19:09:31  -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/app/version.sh
00:11:00.944   19:09:31  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:00.944   19:09:31  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:00.944   19:09:31  -- common/autotest_common.sh@10 -- # set +x
00:11:00.944  ************************************
00:11:00.944  START TEST version
00:11:00.944  ************************************
00:11:00.944   19:09:31 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/app/version.sh
00:11:01.202  * Looking for test storage...
00:11:01.202  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/app
00:11:01.202    19:09:31 version -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:01.202     19:09:31 version -- common/autotest_common.sh@1711 -- # lcov --version
00:11:01.202     19:09:31 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:01.202    19:09:32 version -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:01.202    19:09:32 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:01.202    19:09:32 version -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:01.202    19:09:32 version -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:01.202    19:09:32 version -- scripts/common.sh@336 -- # IFS=.-:
00:11:01.202    19:09:32 version -- scripts/common.sh@336 -- # read -ra ver1
00:11:01.202    19:09:32 version -- scripts/common.sh@337 -- # IFS=.-:
00:11:01.202    19:09:32 version -- scripts/common.sh@337 -- # read -ra ver2
00:11:01.202    19:09:32 version -- scripts/common.sh@338 -- # local 'op=<'
00:11:01.203    19:09:32 version -- scripts/common.sh@340 -- # ver1_l=2
00:11:01.203    19:09:32 version -- scripts/common.sh@341 -- # ver2_l=1
00:11:01.203    19:09:32 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:01.203    19:09:32 version -- scripts/common.sh@344 -- # case "$op" in
00:11:01.203    19:09:32 version -- scripts/common.sh@345 -- # : 1
00:11:01.203    19:09:32 version -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:01.203    19:09:32 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:01.203     19:09:32 version -- scripts/common.sh@365 -- # decimal 1
00:11:01.203     19:09:32 version -- scripts/common.sh@353 -- # local d=1
00:11:01.203     19:09:32 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:01.203     19:09:32 version -- scripts/common.sh@355 -- # echo 1
00:11:01.203    19:09:32 version -- scripts/common.sh@365 -- # ver1[v]=1
00:11:01.203     19:09:32 version -- scripts/common.sh@366 -- # decimal 2
00:11:01.203     19:09:32 version -- scripts/common.sh@353 -- # local d=2
00:11:01.203     19:09:32 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:01.203     19:09:32 version -- scripts/common.sh@355 -- # echo 2
00:11:01.203    19:09:32 version -- scripts/common.sh@366 -- # ver2[v]=2
00:11:01.203    19:09:32 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:01.203    19:09:32 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:01.203    19:09:32 version -- scripts/common.sh@368 -- # return 0
00:11:01.203    19:09:32 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:01.203    19:09:32 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:01.203  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:01.203  		--rc genhtml_branch_coverage=1
00:11:01.203  		--rc genhtml_function_coverage=1
00:11:01.203  		--rc genhtml_legend=1
00:11:01.203  		--rc geninfo_all_blocks=1
00:11:01.203  		--rc geninfo_unexecuted_blocks=1
00:11:01.203  		
00:11:01.203  		'
00:11:01.203    19:09:32 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:01.203  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:01.203  		--rc genhtml_branch_coverage=1
00:11:01.203  		--rc genhtml_function_coverage=1
00:11:01.203  		--rc genhtml_legend=1
00:11:01.203  		--rc geninfo_all_blocks=1
00:11:01.203  		--rc geninfo_unexecuted_blocks=1
00:11:01.203  		
00:11:01.203  		'
00:11:01.203    19:09:32 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:11:01.203  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:01.203  		--rc genhtml_branch_coverage=1
00:11:01.203  		--rc genhtml_function_coverage=1
00:11:01.203  		--rc genhtml_legend=1
00:11:01.203  		--rc geninfo_all_blocks=1
00:11:01.203  		--rc geninfo_unexecuted_blocks=1
00:11:01.203  		
00:11:01.203  		'
00:11:01.203    19:09:32 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:11:01.203  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:01.203  		--rc genhtml_branch_coverage=1
00:11:01.203  		--rc genhtml_function_coverage=1
00:11:01.203  		--rc genhtml_legend=1
00:11:01.203  		--rc geninfo_all_blocks=1
00:11:01.203  		--rc geninfo_unexecuted_blocks=1
00:11:01.203  		
00:11:01.203  		'
00:11:01.203    19:09:32 version -- app/version.sh@17 -- # get_header_version major
00:11:01.203    19:09:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/include/spdk/version.h
00:11:01.203    19:09:32 version -- app/version.sh@14 -- # cut -f2
00:11:01.203    19:09:32 version -- app/version.sh@14 -- # tr -d '"'
00:11:01.203   19:09:32 version -- app/version.sh@17 -- # major=25
00:11:01.203    19:09:32 version -- app/version.sh@18 -- # get_header_version minor
00:11:01.203    19:09:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/include/spdk/version.h
00:11:01.203    19:09:32 version -- app/version.sh@14 -- # cut -f2
00:11:01.203    19:09:32 version -- app/version.sh@14 -- # tr -d '"'
00:11:01.203   19:09:32 version -- app/version.sh@18 -- # minor=1
00:11:01.203    19:09:32 version -- app/version.sh@19 -- # get_header_version patch
00:11:01.203    19:09:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/include/spdk/version.h
00:11:01.203    19:09:32 version -- app/version.sh@14 -- # cut -f2
00:11:01.203    19:09:32 version -- app/version.sh@14 -- # tr -d '"'
00:11:01.203   19:09:32 version -- app/version.sh@19 -- # patch=0
00:11:01.203    19:09:32 version -- app/version.sh@20 -- # get_header_version suffix
00:11:01.203    19:09:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/include/spdk/version.h
00:11:01.203    19:09:32 version -- app/version.sh@14 -- # cut -f2
00:11:01.203    19:09:32 version -- app/version.sh@14 -- # tr -d '"'
00:11:01.203   19:09:32 version -- app/version.sh@20 -- # suffix=-pre
00:11:01.203   19:09:32 version -- app/version.sh@22 -- # version=25.1
00:11:01.203   19:09:32 version -- app/version.sh@25 -- # (( patch != 0 ))
00:11:01.203   19:09:32 version -- app/version.sh@28 -- # version=25.1rc0
00:11:01.203   19:09:32 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/python:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/python:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/python
00:11:01.203    19:09:32 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:11:01.203   19:09:32 version -- app/version.sh@30 -- # py_version=25.1rc0
00:11:01.203   19:09:32 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]]
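The version assembly traced above (app/version.sh@22-31) builds `25.1rc0` from the version.h defines and checks it against the Python package's `spdk.__version__`. A minimal sketch, assuming the step not shown in the trace (a `-pre` suffix mapping to an `rc0` tag, inferred from `version=25.1rc0` at @28):

```shell
# Values cut from include/spdk/version.h by get_header_version in the trace.
major=25 minor=1 patch=0 suffix=-pre

version=$major.$minor
(( patch != 0 )) && version=$version.$patch      # @25: skipped, patch == 0
[[ $suffix == -pre ]] && version=${version}rc0   # assumed mapping for @28

echo "$version"   # 25.1rc0, matching py_version at @30
```

The `[[ 25.1rc0 == \2\5\.\1\r\c\0 ]]` line is that final comparison with the right-hand side glob-escaped, i.e. a literal string match.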
00:11:01.203  
00:11:01.203  real	0m0.203s
00:11:01.203  user	0m0.135s
00:11:01.203  sys	0m0.094s
00:11:01.203   19:09:32 version -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:01.203   19:09:32 version -- common/autotest_common.sh@10 -- # set +x
00:11:01.203  ************************************
00:11:01.203  END TEST version
00:11:01.203  ************************************
00:11:01.203   19:09:32  -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']'
00:11:01.203   19:09:32  -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]]
00:11:01.203    19:09:32  -- spdk/autotest.sh@194 -- # uname -s
00:11:01.203   19:09:32  -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:11:01.203   19:09:32  -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:11:01.203   19:09:32  -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:11:01.203   19:09:32  -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
00:11:01.203   19:09:32  -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:11:01.203   19:09:32  -- spdk/autotest.sh@260 -- # timing_exit lib
00:11:01.203   19:09:32  -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:01.203   19:09:32  -- common/autotest_common.sh@10 -- # set +x
00:11:01.203   19:09:32  -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:11:01.203   19:09:32  -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:11:01.203   19:09:32  -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']'
00:11:01.203   19:09:32  -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:11:01.203   19:09:32  -- spdk/autotest.sh@315 -- # '[' 1 -eq 1 ']'
00:11:01.203   19:09:32  -- spdk/autotest.sh@316 -- # HUGENODE=0
00:11:01.203   19:09:32  -- spdk/autotest.sh@316 -- # run_test vfio_user_qemu /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/vfio_user.sh --iso
00:11:01.203   19:09:32  -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:01.203   19:09:32  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:01.203   19:09:32  -- common/autotest_common.sh@10 -- # set +x
00:11:01.203  ************************************
00:11:01.203  START TEST vfio_user_qemu
00:11:01.203  ************************************
00:11:01.203   19:09:32 vfio_user_qemu -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/vfio_user.sh --iso
00:11:01.462  * Looking for test storage...
00:11:01.462  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user
00:11:01.462    19:09:32 vfio_user_qemu -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:01.462     19:09:32 vfio_user_qemu -- common/autotest_common.sh@1711 -- # lcov --version
00:11:01.462     19:09:32 vfio_user_qemu -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:01.462    19:09:32 vfio_user_qemu -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:01.462    19:09:32 vfio_user_qemu -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:01.462    19:09:32 vfio_user_qemu -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:01.462    19:09:32 vfio_user_qemu -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:01.462    19:09:32 vfio_user_qemu -- scripts/common.sh@336 -- # IFS=.-:
00:11:01.462    19:09:32 vfio_user_qemu -- scripts/common.sh@336 -- # read -ra ver1
00:11:01.462    19:09:32 vfio_user_qemu -- scripts/common.sh@337 -- # IFS=.-:
00:11:01.462    19:09:32 vfio_user_qemu -- scripts/common.sh@337 -- # read -ra ver2
00:11:01.462    19:09:32 vfio_user_qemu -- scripts/common.sh@338 -- # local 'op=<'
00:11:01.462    19:09:32 vfio_user_qemu -- scripts/common.sh@340 -- # ver1_l=2
00:11:01.462    19:09:32 vfio_user_qemu -- scripts/common.sh@341 -- # ver2_l=1
00:11:01.462    19:09:32 vfio_user_qemu -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:01.462    19:09:32 vfio_user_qemu -- scripts/common.sh@344 -- # case "$op" in
00:11:01.462    19:09:32 vfio_user_qemu -- scripts/common.sh@345 -- # : 1
00:11:01.462    19:09:32 vfio_user_qemu -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:01.462    19:09:32 vfio_user_qemu -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:01.462     19:09:32 vfio_user_qemu -- scripts/common.sh@365 -- # decimal 1
00:11:01.462     19:09:32 vfio_user_qemu -- scripts/common.sh@353 -- # local d=1
00:11:01.462     19:09:32 vfio_user_qemu -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:01.462     19:09:32 vfio_user_qemu -- scripts/common.sh@355 -- # echo 1
00:11:01.462    19:09:32 vfio_user_qemu -- scripts/common.sh@365 -- # ver1[v]=1
00:11:01.462     19:09:32 vfio_user_qemu -- scripts/common.sh@366 -- # decimal 2
00:11:01.462     19:09:32 vfio_user_qemu -- scripts/common.sh@353 -- # local d=2
00:11:01.462     19:09:32 vfio_user_qemu -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:01.462     19:09:32 vfio_user_qemu -- scripts/common.sh@355 -- # echo 2
00:11:01.462    19:09:32 vfio_user_qemu -- scripts/common.sh@366 -- # ver2[v]=2
00:11:01.462    19:09:32 vfio_user_qemu -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:01.462    19:09:32 vfio_user_qemu -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:01.462    19:09:32 vfio_user_qemu -- scripts/common.sh@368 -- # return 0
00:11:01.462    19:09:32 vfio_user_qemu -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:01.462    19:09:32 vfio_user_qemu -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:01.462  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:01.462  		--rc genhtml_branch_coverage=1
00:11:01.462  		--rc genhtml_function_coverage=1
00:11:01.462  		--rc genhtml_legend=1
00:11:01.462  		--rc geninfo_all_blocks=1
00:11:01.462  		--rc geninfo_unexecuted_blocks=1
00:11:01.462  		
00:11:01.462  		'
00:11:01.462    19:09:32 vfio_user_qemu -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:01.462  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:01.462  		--rc genhtml_branch_coverage=1
00:11:01.462  		--rc genhtml_function_coverage=1
00:11:01.462  		--rc genhtml_legend=1
00:11:01.462  		--rc geninfo_all_blocks=1
00:11:01.462  		--rc geninfo_unexecuted_blocks=1
00:11:01.462  		
00:11:01.462  		'
00:11:01.462    19:09:32 vfio_user_qemu -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:11:01.462  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:01.462  		--rc genhtml_branch_coverage=1
00:11:01.462  		--rc genhtml_function_coverage=1
00:11:01.462  		--rc genhtml_legend=1
00:11:01.462  		--rc geninfo_all_blocks=1
00:11:01.462  		--rc geninfo_unexecuted_blocks=1
00:11:01.462  		
00:11:01.462  		'
00:11:01.462    19:09:32 vfio_user_qemu -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:11:01.462  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:01.462  		--rc genhtml_branch_coverage=1
00:11:01.462  		--rc genhtml_function_coverage=1
00:11:01.462  		--rc genhtml_legend=1
00:11:01.462  		--rc geninfo_all_blocks=1
00:11:01.462  		--rc geninfo_unexecuted_blocks=1
00:11:01.462  		
00:11:01.462  		'
00:11:01.462   19:09:32 vfio_user_qemu -- vfio_user/vfio_user.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/common.sh
00:11:01.462    19:09:32 vfio_user_qemu -- vfio_user/common.sh@6 -- # : 128
00:11:01.462    19:09:32 vfio_user_qemu -- vfio_user/common.sh@7 -- # : 512
00:11:01.462    19:09:32 vfio_user_qemu -- vfio_user/common.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh
00:11:01.462     19:09:32 vfio_user_qemu -- vhost/common.sh@6 -- # : false
00:11:01.462     19:09:32 vfio_user_qemu -- vhost/common.sh@7 -- # : /root/vhost_test
00:11:01.462     19:09:32 vfio_user_qemu -- vhost/common.sh@8 -- # : /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:11:01.462     19:09:32 vfio_user_qemu -- vhost/common.sh@9 -- # : qemu-img
00:11:01.462      19:09:32 vfio_user_qemu -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/..
00:11:01.462     19:09:32 vfio_user_qemu -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest
00:11:01.462     19:09:32 vfio_user_qemu -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:11:01.462     19:09:32 vfio_user_qemu -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:11:01.462     19:09:32 vfio_user_qemu -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:11:01.462     19:09:32 vfio_user_qemu -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:11:01.462     19:09:32 vfio_user_qemu -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:11:01.462       19:09:32 vfio_user_qemu -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/vfio_user.sh
00:11:01.462      19:09:32 vfio_user_qemu -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user
00:11:01.462     19:09:32 vfio_user_qemu -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user
00:11:01.462     19:09:32 vfio_user_qemu -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:11:01.462     19:09:32 vfio_user_qemu -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:11:01.462     19:09:32 vfio_user_qemu -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:11:01.462     19:09:32 vfio_user_qemu -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:11:01.462     19:09:32 vfio_user_qemu -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config
00:11:01.462      19:09:32 vfio_user_qemu -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:11:01.462      19:09:32 vfio_user_qemu -- common/autotest.config@2 -- # vhost_0_main_core=0
00:11:01.462      19:09:32 vfio_user_qemu -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:11:01.462      19:09:32 vfio_user_qemu -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:11:01.462      19:09:32 vfio_user_qemu -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:11:01.462      19:09:32 vfio_user_qemu -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:11:01.462      19:09:32 vfio_user_qemu -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:11:01.462      19:09:32 vfio_user_qemu -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:11:01.462      19:09:32 vfio_user_qemu -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:11:01.462      19:09:32 vfio_user_qemu -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:11:01.463      19:09:32 vfio_user_qemu -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:11:01.463      19:09:32 vfio_user_qemu -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:11:01.463      19:09:32 vfio_user_qemu -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:11:01.463      19:09:32 vfio_user_qemu -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:11:01.463      19:09:32 vfio_user_qemu -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:11:01.463      19:09:32 vfio_user_qemu -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:11:01.463      19:09:32 vfio_user_qemu -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:11:01.463      19:09:32 vfio_user_qemu -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:11:01.463      19:09:32 vfio_user_qemu -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:11:01.463      19:09:32 vfio_user_qemu -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:11:01.463      19:09:32 vfio_user_qemu -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:11:01.463      19:09:32 vfio_user_qemu -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:11:01.463      19:09:32 vfio_user_qemu -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:11:01.463      19:09:32 vfio_user_qemu -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:11:01.463      19:09:32 vfio_user_qemu -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:11:01.463      19:09:32 vfio_user_qemu -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:11:01.463     19:09:32 vfio_user_qemu -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh
00:11:01.463      19:09:32 vfio_user_qemu -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:11:01.463      19:09:32 vfio_user_qemu -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:11:01.463      19:09:32 vfio_user_qemu -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:11:01.463      19:09:32 vfio_user_qemu -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler
00:11:01.463      19:09:32 vfio_user_qemu -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:11:01.463      19:09:32 vfio_user_qemu -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh
00:11:01.463       19:09:32 vfio_user_qemu -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:11:01.463        19:09:32 vfio_user_qemu -- scheduler/cgroups.sh@244 -- # check_cgroup
00:11:01.463        19:09:32 vfio_user_qemu -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:11:01.463        19:09:32 vfio_user_qemu -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:11:01.463        19:09:32 vfio_user_qemu -- scheduler/cgroups.sh@10 -- # echo 2
00:11:01.463       19:09:32 vfio_user_qemu -- scheduler/cgroups.sh@244 -- # cgroup_version=2
00:11:01.463    19:09:32 vfio_user_qemu -- vfio_user/common.sh@11 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:11:01.463    19:09:32 vfio_user_qemu -- vfio_user/common.sh@14 -- # [[ ! -e /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 ]]
00:11:01.463    19:09:32 vfio_user_qemu -- vfio_user/common.sh@19 -- # QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:11:01.463   19:09:32 vfio_user_qemu -- vfio_user/vfio_user.sh@11 -- # echo 'Running SPDK vfio-user fio autotest...'
00:11:01.463  Running SPDK vfio-user fio autotest...
00:11:01.463   19:09:32 vfio_user_qemu -- vfio_user/vfio_user.sh@13 -- # vhosttestinit
00:11:01.463   19:09:32 vfio_user_qemu -- vhost/common.sh@37 -- # '[' iso == iso ']'
00:11:01.463   19:09:32 vfio_user_qemu -- vhost/common.sh@38 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh
00:11:02.834  0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:11:02.834  0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:11:02.834  0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:11:02.834  0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:11:02.834  0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:11:02.834  0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:11:02.834  0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:11:02.834  0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:11:02.834  0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:11:02.834  0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:11:02.834  0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:11:02.834  0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:11:02.835  0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:11:02.835  0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:11:02.835  0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:11:02.835  0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:11:02.835  0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:11:02.835   19:09:33 vfio_user_qemu -- vhost/common.sh@41 -- # [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz ]]
00:11:02.835   19:09:33 vfio_user_qemu -- vhost/common.sh@41 -- # [[ ! -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:11:02.835   19:09:33 vfio_user_qemu -- vhost/common.sh@42 -- # gzip -dc /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz
00:11:20.928   19:09:49 vfio_user_qemu -- vhost/common.sh@46 -- # [[ ! -f /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:11:20.928   19:09:49 vfio_user_qemu -- vfio_user/vfio_user.sh@15 -- # run_test vfio_user_nvme_fio /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/vfio_user_fio.sh
00:11:20.928   19:09:49 vfio_user_qemu -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:20.928   19:09:49 vfio_user_qemu -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:20.928   19:09:49 vfio_user_qemu -- common/autotest_common.sh@10 -- # set +x
00:11:20.928  ************************************
00:11:20.928  START TEST vfio_user_nvme_fio
00:11:20.928  ************************************
00:11:20.928   19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/vfio_user_fio.sh
00:11:20.928  * Looking for test storage...
00:11:20.928  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme
00:11:20.928    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:20.928     19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1711 -- # lcov --version
00:11:20.928     19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:20.928    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:20.928    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:20.928    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:20.928    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:20.928    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@336 -- # IFS=.-:
00:11:20.928    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@336 -- # read -ra ver1
00:11:20.928    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@337 -- # IFS=.-:
00:11:20.928    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@337 -- # read -ra ver2
00:11:20.928    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@338 -- # local 'op=<'
00:11:20.928    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@340 -- # ver1_l=2
00:11:20.928    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@341 -- # ver2_l=1
00:11:20.928    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:20.928    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@344 -- # case "$op" in
00:11:20.928    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@345 -- # : 1
00:11:20.928    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:20.928    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:20.928     19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@365 -- # decimal 1
00:11:20.928     19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@353 -- # local d=1
00:11:20.928     19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:20.928     19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@355 -- # echo 1
00:11:20.928    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@365 -- # ver1[v]=1
00:11:20.928     19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@366 -- # decimal 2
00:11:20.928     19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@353 -- # local d=2
00:11:20.928     19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:20.928     19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@355 -- # echo 2
00:11:20.928    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@366 -- # ver2[v]=2
00:11:20.928    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:20.928    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:20.928    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@368 -- # return 0
00:11:20.928    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:20.928    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:20.928  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:20.928  		--rc genhtml_branch_coverage=1
00:11:20.928  		--rc genhtml_function_coverage=1
00:11:20.928  		--rc genhtml_legend=1
00:11:20.928  		--rc geninfo_all_blocks=1
00:11:20.928  		--rc geninfo_unexecuted_blocks=1
00:11:20.928  		
00:11:20.928  		'
00:11:20.928    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:20.928  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:20.928  		--rc genhtml_branch_coverage=1
00:11:20.928  		--rc genhtml_function_coverage=1
00:11:20.928  		--rc genhtml_legend=1
00:11:20.928  		--rc geninfo_all_blocks=1
00:11:20.928  		--rc geninfo_unexecuted_blocks=1
00:11:20.928  		
00:11:20.928  		'
00:11:20.928    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:11:20.928  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:20.928  		--rc genhtml_branch_coverage=1
00:11:20.928  		--rc genhtml_function_coverage=1
00:11:20.928  		--rc genhtml_legend=1
00:11:20.928  		--rc geninfo_all_blocks=1
00:11:20.928  		--rc geninfo_unexecuted_blocks=1
00:11:20.928  		
00:11:20.928  		'
00:11:20.928    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:11:20.928  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:20.928  		--rc genhtml_branch_coverage=1
00:11:20.928  		--rc genhtml_function_coverage=1
00:11:20.928  		--rc genhtml_legend=1
00:11:20.928  		--rc geninfo_all_blocks=1
00:11:20.928  		--rc geninfo_unexecuted_blocks=1
00:11:20.928  		
00:11:20.928  		'
00:11:20.928   19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/common.sh
00:11:20.928    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/common.sh@6 -- # : 128
00:11:20.928    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/common.sh@7 -- # : 512
00:11:20.928    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/common.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh
00:11:20.928     19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@6 -- # : false
00:11:20.928     19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@7 -- # : /root/vhost_test
00:11:20.928     19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@8 -- # : /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:11:20.928     19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@9 -- # : qemu-img
00:11:20.928      19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/..
00:11:20.928     19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest
00:11:20.928     19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:11:20.928     19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:11:20.928     19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:11:20.928     19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:11:20.928     19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:11:20.928       19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/vfio_user_fio.sh
00:11:20.928      19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme
00:11:20.928     19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme
00:11:20.928     19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:11:20.928     19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:11:20.928     19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:11:20.928     19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:11:20.928     19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config
00:11:20.928      19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:11:20.928      19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@2 -- # vhost_0_main_core=0
00:11:20.928      19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:11:20.928      19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:11:20.928      19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:11:20.928      19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:11:20.928      19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:11:20.928      19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:11:20.928      19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:11:20.928      19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:11:20.928      19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:11:20.928      19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:11:20.928      19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:11:20.928      19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:11:20.928      19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:11:20.928      19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:11:20.928      19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:11:20.928      19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:11:20.929      19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:11:20.929      19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:11:20.929      19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:11:20.929      19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:11:20.929      19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:11:20.929      19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:11:20.929      19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:11:20.929      19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:11:20.929     19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh
00:11:20.929      19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:11:20.929      19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:11:20.929      19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:11:20.929      19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler
00:11:20.929      19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:11:20.929      19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh
00:11:20.929       19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:11:20.929        19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/cgroups.sh@244 -- # check_cgroup
00:11:20.929        19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:11:20.929        19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:11:20.929        19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/cgroups.sh@10 -- # echo 2
00:11:20.929       19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/cgroups.sh@244 -- # cgroup_version=2
00:11:20.929    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/common.sh@11 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:11:20.929    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/common.sh@14 -- # [[ ! -e /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 ]]
00:11:20.929    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/common.sh@19 -- # QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:11:20.929   19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/common.sh
00:11:20.929   19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/autotest.config
00:11:20.929    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@1 -- # vhost_0_reactor_mask='[0-3]'
00:11:20.929    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@2 -- # vhost_0_main_core=0
00:11:20.929    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@4 -- # VM_0_qemu_mask=4-5
00:11:20.929    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:11:20.929    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@7 -- # VM_1_qemu_mask=6-7
00:11:20.929    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:11:20.929    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@10 -- # VM_2_qemu_mask=8-9
00:11:20.929    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:11:20.929    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@13 -- # get_vhost_dir 0
00:11:20.929    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@105 -- # local vhost_name=0
00:11:20.929    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:11:20.929    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:11:20.929   19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@13 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:11:20.929   19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@15 -- # fio_bin=--fio-bin=/usr/src/fio-static/fio
00:11:20.929   19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@16 -- # vm_no=2
00:11:20.929   19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@18 -- # trap clean_vfio_user EXIT
00:11:20.929   19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@19 -- # vhosttestinit
00:11:20.929   19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@37 -- # '[' '' == iso ']'
00:11:20.929   19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@41 -- # [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz ]]
00:11:20.929   19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@41 -- # [[ ! -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:11:20.929   19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@46 -- # [[ ! -f /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:11:20.929   19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@21 -- # timing_enter start_vfio_user
00:11:20.929   19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:20.929   19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:11:20.929   19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@22 -- # vfio_user_run 0
00:11:20.929   19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@11 -- # local vhost_name=0
00:11:20.929   19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@12 -- # local vfio_user_dir nvmf_pid_file rpc_py
00:11:20.929    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@14 -- # get_vhost_dir 0
00:11:20.929    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@105 -- # local vhost_name=0
00:11:20.929    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:11:20.929    19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:11:20.929   19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@14 -- # vfio_user_dir=/root/vhost_test/vhost/0
00:11:20.929   19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@15 -- # nvmf_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:11:20.929   19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@16 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:11:20.929   19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@18 -- # mkdir -p /root/vhost_test/vhost/0
00:11:20.929   19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@20 -- # timing_enter vfio_user_start
00:11:20.929   19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:20.929   19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:11:20.929   19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@22 -- # nvmfpid=530287
00:11:20.929   19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/nvmf_tgt -r /root/vhost_test/vhost/0/rpc.sock -m 0xf -s 512
00:11:20.929   19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@23 -- # echo 530287
00:11:20.929   19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@25 -- # echo 'Process pid: 530287'
00:11:20.929  Process pid: 530287
00:11:20.929   19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@26 -- # echo 'waiting for app to run...'
00:11:20.929  waiting for app to run...
00:11:20.929   19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@27 -- # waitforlisten 530287 /root/vhost_test/vhost/0/rpc.sock
00:11:20.929   19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@835 -- # '[' -z 530287 ']'
00:11:20.929   19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@839 -- # local rpc_addr=/root/vhost_test/vhost/0/rpc.sock
00:11:20.929   19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:20.929   19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...'
00:11:20.929  Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...
00:11:20.929   19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:20.929   19:09:49 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:11:20.929  [2024-12-06 19:09:49.934473] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:11:20.929  [2024-12-06 19:09:49.934630] [ DPDK EAL parameters: nvmf --no-shconf -c 0xf -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid530287 ]
00:11:20.929  EAL: No free 2048 kB hugepages reported on node 1
00:11:20.929  [2024-12-06 19:09:50.230682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:20.929  [2024-12-06 19:09:50.346033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:20.929  [2024-12-06 19:09:50.346159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:11:20.929  [2024-12-06 19:09:50.346173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:20.929  [2024-12-06 19:09:50.346182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:11:20.929   19:09:50 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:20.929   19:09:50 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@868 -- # return 0
00:11:20.929   19:09:50 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@29 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_create_transport -t VFIOUSER
00:11:20.929   19:09:51 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@30 -- # timing_exit vfio_user_start
00:11:20.929   19:09:51 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:20.929   19:09:51 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:11:20.929    19:09:51 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@27 -- # seq 0 2
00:11:20.929   19:09:51 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@27 -- # for i in $(seq 0 $vm_no)
00:11:20.929   19:09:51 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@28 -- # vm_muser_dir=/root/vhost_test/vms/0/muser
00:11:20.929   19:09:51 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@29 -- # rm -rf /root/vhost_test/vms/0/muser
00:11:20.929   19:09:51 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@30 -- # mkdir -p /root/vhost_test/vms/0/muser/domain/muser0/0
00:11:20.929   19:09:51 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@32 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_create_subsystem nqn.2019-07.io.spdk:cnode0 -s SPDK000 -a
00:11:20.929   19:09:51 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@33 -- # (( i == vm_no ))
00:11:20.929   19:09:51 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_malloc_create 128 512 -b Malloc0
00:11:20.929  Malloc0
00:11:20.929   19:09:51 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@38 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode0 Malloc0
00:11:21.188   19:09:52 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@40 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode0 -t VFIOUSER -a /root/vhost_test/vms/0/muser/domain/muser0/0 -s 0
00:11:21.446   19:09:52 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@27 -- # for i in $(seq 0 $vm_no)
00:11:21.446   19:09:52 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@28 -- # vm_muser_dir=/root/vhost_test/vms/1/muser
00:11:21.446   19:09:52 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@29 -- # rm -rf /root/vhost_test/vms/1/muser
00:11:21.446   19:09:52 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@30 -- # mkdir -p /root/vhost_test/vms/1/muser/domain/muser1/1
00:11:21.446   19:09:52 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@32 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -s SPDK001 -a
00:11:22.012   19:09:52 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@33 -- # (( i == vm_no ))
00:11:22.012   19:09:52 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_malloc_create 128 512 -b Malloc1
00:11:22.270  Malloc1
00:11:22.270   19:09:53 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@38 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
00:11:22.527   19:09:53 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@40 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /root/vhost_test/vms/1/muser/domain/muser1/1 -s 0
00:11:22.784   19:09:53 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@27 -- # for i in $(seq 0 $vm_no)
00:11:22.784   19:09:53 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@28 -- # vm_muser_dir=/root/vhost_test/vms/2/muser
00:11:22.784   19:09:53 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@29 -- # rm -rf /root/vhost_test/vms/2/muser
00:11:22.784   19:09:53 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@30 -- # mkdir -p /root/vhost_test/vms/2/muser/domain/muser2/2
00:11:22.784   19:09:53 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@32 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -s SPDK002 -a
00:11:23.042   19:09:53 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@33 -- # (( i == vm_no ))
00:11:23.042   19:09:53 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/gen_nvme.sh
00:11:23.042   19:09:53 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock load_subsystem_config
00:11:26.320   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@35 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Nvme0n1
00:11:26.578   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@40 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /root/vhost_test/vms/2/muser/domain/muser2/2 -s 0
00:11:26.837   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@43 -- # timing_exit start_vfio_user
00:11:26.837   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:26.837   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:11:26.837   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@45 -- # used_vms=
00:11:26.837   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@46 -- # timing_enter launch_vms
00:11:26.837   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:26.837   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:11:26.837    19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@47 -- # seq 0 2
00:11:26.837   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@47 -- # for i in $(seq 0 $vm_no)
00:11:26.837   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@48 -- # vm_setup --disk-type=vfio_user --force=0 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --memory=768 --disks=0
00:11:26.837   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@518 -- # xtrace_disable
00:11:26.837   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:11:26.837  WARN: removing existing VM in '/root/vhost_test/vms/0'
00:11:26.837  INFO: Creating new VM in /root/vhost_test/vms/0
00:11:26.838  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:11:26.838  INFO: TASK MASK: 4-5
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@671 -- # local node_num=0
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@672 -- # local boot_disk_present=false
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:11:26.838  INFO: NUMA NODE: 0
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@677 -- # [[ -n '' ]]
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@686 -- # [[ -z '' ]]
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@701 -- # IFS=,
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@701 -- # read -r disk disk_type _
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@702 -- # [[ -z '' ]]
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@702 -- # disk_type=vfio_user
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@704 -- # case $disk_type in
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@758 -- # notice 'using socket /root/vhost_test/vms/0/domain/muser0/0/cntrl'
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/0/domain/muser0/0/cntrl'
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/0/domain/muser0/0/cntrl'
00:11:26.838  INFO: using socket /root/vhost_test/vms/0/domain/muser0/0/cntrl
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@759 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/$vm_num/muser/domain/muser$disk/$disk/cntrl")
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@760 -- # [[ 0 == '' ]]
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@780 -- # [[ -n '' ]]
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@785 -- # (( 0 ))
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/0/run.sh'
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/0/run.sh'
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/0/run.sh'
00:11:26.838  INFO: Saving to /root/vhost_test/vms/0/run.sh
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@787 -- # cat
00:11:26.838    19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 4-5 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 768 --enable-kvm -cpu host -smp 2 -vga std -vnc :100 -daemonize -object memory-backend-file,id=mem,size=768M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10002,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/0/qemu.pid -serial file:/root/vhost_test/vms/0/serial.log -D /root/vhost_test/vms/0/qemu.log -chardev file,path=/root/vhost_test/vms/0/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10000-:22,hostfwd=tcp::10001-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/0/muser/domain/muser0/0/cntrl
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/0/run.sh
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@827 -- # echo 10000
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@828 -- # echo 10001
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@829 -- # echo 10002
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/0/migration_port
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@832 -- # [[ -z '' ]]
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@834 -- # echo 10004
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@835 -- # echo 100
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@837 -- # [[ -z '' ]]
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@838 -- # [[ -z '' ]]
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@49 -- # used_vms+=' 0'
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@47 -- # for i in $(seq 0 $vm_no)
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@48 -- # vm_setup --disk-type=vfio_user --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --memory=768 --disks=1
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@518 -- # xtrace_disable
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:11:26.838  WARN: removing existing VM in '/root/vhost_test/vms/1'
00:11:26.838  INFO: Creating new VM in /root/vhost_test/vms/1
00:11:26.838  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:11:26.838  INFO: TASK MASK: 6-7
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@671 -- # local node_num=0
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@672 -- # local boot_disk_present=false
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:11:26.838  INFO: NUMA NODE: 0
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@677 -- # [[ -n '' ]]
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@686 -- # [[ -z '' ]]
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:11:26.838   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@701 -- # IFS=,
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@701 -- # read -r disk disk_type _
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@702 -- # [[ -z '' ]]
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@702 -- # disk_type=vfio_user
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@704 -- # case $disk_type in
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@758 -- # notice 'using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:11:26.839  INFO: using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@759 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/$vm_num/muser/domain/muser$disk/$disk/cntrl")
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@760 -- # [[ 1 == '' ]]
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@780 -- # [[ -n '' ]]
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@785 -- # (( 0 ))
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh'
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh'
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh'
00:11:26.839  INFO: Saving to /root/vhost_test/vms/1/run.sh
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@787 -- # cat
00:11:26.839    19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 768 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=768M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/1/muser/domain/muser1/1/cntrl
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/1/run.sh
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@827 -- # echo 10100
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@828 -- # echo 10101
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@829 -- # echo 10102
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/1/migration_port
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@832 -- # [[ -z '' ]]
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@834 -- # echo 10104
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@835 -- # echo 101
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@837 -- # [[ -z '' ]]
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@838 -- # [[ -z '' ]]
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@49 -- # used_vms+=' 1'
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@47 -- # for i in $(seq 0 $vm_no)
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@48 -- # vm_setup --disk-type=vfio_user --force=2 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --memory=768 --disks=2
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@518 -- # xtrace_disable
00:11:26.839   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:11:26.839  WARN: removing existing VM in '/root/vhost_test/vms/2'
00:11:26.839  INFO: Creating new VM in /root/vhost_test/vms/2
00:11:26.839  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:11:26.839  INFO: TASK MASK: 8-9
00:11:27.098   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@671 -- # local node_num=0
00:11:27.098   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@672 -- # local boot_disk_present=false
00:11:27.098   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:11:27.098   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:11:27.098   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:11:27.098   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:11:27.098   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:11:27.098   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:27.098   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:11:27.098   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:11:27.098  INFO: NUMA NODE: 0
00:11:27.098   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:11:27.098   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:11:27.098   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:11:27.098   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:11:27.098   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@677 -- # [[ -n '' ]]
00:11:27.098   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:11:27.098   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:11:27.098   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:11:27.098   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:11:27.098   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:11:27.098   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:11:27.098   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:11:27.098   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:11:27.098   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@686 -- # [[ -z '' ]]
00:11:27.098   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:11:27.098   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:11:27.098   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:11:27.098   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:11:27.098   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:11:27.098   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@701 -- # IFS=,
00:11:27.098   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@701 -- # read -r disk disk_type _
00:11:27.098   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@702 -- # [[ -z '' ]]
00:11:27.098   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@702 -- # disk_type=vfio_user
00:11:27.098   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@704 -- # case $disk_type in
00:11:27.098   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@758 -- # notice 'using socket /root/vhost_test/vms/2/domain/muser2/2/cntrl'
00:11:27.098   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/2/domain/muser2/2/cntrl'
00:11:27.098   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:11:27.098   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:11:27.098   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:11:27.098   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:27.098   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/2/domain/muser2/2/cntrl'
00:11:27.099  INFO: using socket /root/vhost_test/vms/2/domain/muser2/2/cntrl
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@759 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/$vm_num/muser/domain/muser$disk/$disk/cntrl")
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@760 -- # [[ 2 == '' ]]
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@780 -- # [[ -n '' ]]
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@785 -- # (( 0 ))
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/2/run.sh'
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/2/run.sh'
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/2/run.sh'
00:11:27.099  INFO: Saving to /root/vhost_test/vms/2/run.sh
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@787 -- # cat
00:11:27.099    19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 8-9 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 768 --enable-kvm -cpu host -smp 2 -vga std -vnc :102 -daemonize -object memory-backend-file,id=mem,size=768M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10202,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/2/qemu.pid -serial file:/root/vhost_test/vms/2/serial.log -D /root/vhost_test/vms/2/qemu.log -chardev file,path=/root/vhost_test/vms/2/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10200-:22,hostfwd=tcp::10201-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/2/muser/domain/muser2/2/cntrl
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/2/run.sh
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@827 -- # echo 10200
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@828 -- # echo 10201
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@829 -- # echo 10202
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/2/migration_port
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@832 -- # [[ -z '' ]]
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@834 -- # echo 10204
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@835 -- # echo 102
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@837 -- # [[ -z '' ]]
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@838 -- # [[ -z '' ]]
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@49 -- # used_vms+=' 2'
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@52 -- # vm_run 0 1 2
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@843 -- # local run_all=false
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@844 -- # local vms_to_run=
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@846 -- # getopts a-: optchar
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@856 -- # false
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@859 -- # shift 0
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@860 -- # for vm in "$@"
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@861 -- # vm_num_is_valid 0
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/0/run.sh ]]
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@866 -- # vms_to_run+=' 0'
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@860 -- # for vm in "$@"
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@861 -- # vm_num_is_valid 0
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]]
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@866 -- # vms_to_run+=' 1'
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@860 -- # for vm in "$@"
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@861 -- # vm_num_is_valid 0
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/2/run.sh ]]
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@866 -- # vms_to_run+=' 2'
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@871 -- # vm_is_running 0
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 0
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/0
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@373 -- # return 1
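The `vm_is_running` check traced above (vhost/common.sh @369-@373) returns 1 because no qemu.pid file exists yet. A simplified sketch of that helper, with the base directory assumed from the log paths (not the verbatim SPDK implementation):

```shell
# Sketch of the vm_is_running check: a VM counts as running only if its
# qemu.pid file is readable and the recorded PID is still alive.
# VM_BASE_DIR is assumed from the log paths (/root/vhost_test/vms).
VM_BASE_DIR=${VM_BASE_DIR:-/root/vhost_test/vms}

vm_is_running() {
    local vm_dir=$VM_BASE_DIR/$1
    # No pid file yet -> the VM was never started (the case in this log)
    [[ -r $vm_dir/qemu.pid ]] || return 1
    local pid
    pid=$(<"$vm_dir/qemu.pid")
    # Signal 0 probes the process without actually signaling it
    kill -0 "$pid" 2> /dev/null
}
```

The `return 1` path above is exactly what the log shows: the check fails, so the launcher proceeds to start the VM.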
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/0/run.sh'
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/0/run.sh'
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/0/run.sh'
00:11:27.099  INFO: running /root/vhost_test/vms/0/run.sh
00:11:27.099   19:09:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@877 -- # /root/vhost_test/vms/0/run.sh
00:11:27.099  Running VM in /root/vhost_test/vms/0
00:11:27.666  Waiting for QEMU pid file
00:11:27.924  [2024-12-06 19:09:58.782210] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/0/muser/domain/muser0/0: enabling controller
00:11:28.858  === qemu.log ===
00:11:28.858  === qemu.log ===
00:11:28.858   19:09:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:11:28.858   19:09:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@871 -- # vm_is_running 1
00:11:28.858   19:09:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:11:28.858   19:09:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:28.858   19:09:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:28.858   19:09:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:11:28.858   19:09:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:11:28.858   19:09:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@373 -- # return 1
00:11:28.858   19:09:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/1/run.sh'
00:11:28.858   19:09:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh'
00:11:28.858   19:09:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:11:28.858   19:09:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:11:28.858   19:09:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:11:28.858   19:09:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:28.858   19:09:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:11:28.858   19:09:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh'
00:11:28.858  INFO: running /root/vhost_test/vms/1/run.sh
00:11:28.858   19:09:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@877 -- # /root/vhost_test/vms/1/run.sh
00:11:28.858  Running VM in /root/vhost_test/vms/1
00:11:29.116  Waiting for QEMU pid file
00:11:29.373  [2024-12-06 19:10:00.201952] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: enabling controller
00:11:30.303  === qemu.log ===
00:11:30.303  === qemu.log ===
00:11:30.303   19:10:01 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:11:30.303   19:10:01 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@871 -- # vm_is_running 2
00:11:30.303   19:10:01 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 2
00:11:30.303   19:10:01 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:30.303   19:10:01 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:30.303   19:10:01 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/2
00:11:30.303   19:10:01 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/2/qemu.pid ]]
00:11:30.303   19:10:01 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@373 -- # return 1
00:11:30.303   19:10:01 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/2/run.sh'
00:11:30.303   19:10:01 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/2/run.sh'
00:11:30.303   19:10:01 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:11:30.303   19:10:01 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:11:30.303   19:10:01 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:11:30.303   19:10:01 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:30.303   19:10:01 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:11:30.303   19:10:01 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/2/run.sh'
00:11:30.303  INFO: running /root/vhost_test/vms/2/run.sh
00:11:30.303   19:10:01 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@877 -- # /root/vhost_test/vms/2/run.sh
00:11:30.304  Running VM in /root/vhost_test/vms/2
00:11:30.560  Waiting for QEMU pid file
00:11:30.560  [2024-12-06 19:10:01.510075] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/2/muser/domain/muser2/2: enabling controller
00:11:31.491  === qemu.log ===
00:11:31.491  === qemu.log ===
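The launch sequence repeated above for VMs 0-2 (vhost/common.sh @870-@877) follows a simple pattern: skip a VM that is already running, otherwise announce and invoke its `run.sh`. A self-contained sketch, with `vm_is_running` stubbed out and the actual QEMU launch replaced by the log message (assumptions, not the verbatim helper):

```shell
# Stub standing in for the real pid-file check: pretend nothing runs yet.
vm_is_running() { return 1; }

# Sketch of the @870-@877 loop: start each collected VM unless running.
launch_vms() {
    local vm
    for vm in "$@"; do
        if vm_is_running "$vm"; then
            echo "WARNING: VM$vm already running, skipping"
            continue
        fi
        echo "INFO: running /root/vhost_test/vms/$vm/run.sh"
        # /root/vhost_test/vms/$vm/run.sh   # would launch QEMU here
    done
}
```

With all three pid-file checks failing, each iteration takes the INFO branch, matching the three "running .../run.sh" lines in the log.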
00:11:31.491   19:10:02 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@53 -- # vm_wait_for_boot 60 0 1 2
00:11:31.491   19:10:02 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@913 -- # assert_number 60
00:11:31.491   19:10:02 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@281 -- # [[ 60 =~ [0-9]+ ]]
00:11:31.491   19:10:02 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@281 -- # return 0
00:11:31.491   19:10:02 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@915 -- # xtrace_disable
00:11:31.491   19:10:02 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:11:31.491  INFO: Waiting for VMs to boot
00:11:31.491  INFO: waiting for VM0 (/root/vhost_test/vms/0)
00:11:41.451  [2024-12-06 19:10:11.530299] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/0/muser/domain/muser0/0: disabling controller
00:11:41.451  [2024-12-06 19:10:11.539335] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/0/muser/domain/muser0/0: disabling controller
00:11:41.451  [2024-12-06 19:10:11.543358] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/0/muser/domain/muser0/0: enabling controller
00:11:42.017  [2024-12-06 19:10:12.905829] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller
00:11:42.017  [2024-12-06 19:10:12.914856] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller
00:11:42.017  [2024-12-06 19:10:12.918887] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: enabling controller
00:11:43.389  [2024-12-06 19:10:13.966004] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/2/muser/domain/muser2/2: disabling controller
00:11:43.389  [2024-12-06 19:10:13.976018] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/2/muser/domain/muser2/2: disabling controller
00:11:43.389  [2024-12-06 19:10:13.980067] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/2/muser/domain/muser2/2: enabling controller
00:11:53.355  
00:11:53.355  INFO: VM0 ready
00:11:53.355  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:11:53.355  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:11:53.355  INFO: waiting for VM1 (/root/vhost_test/vms/1)
00:11:53.355  
00:11:53.355  INFO: VM1 ready
00:11:53.613  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:11:53.613  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:11:54.548  INFO: waiting for VM2 (/root/vhost_test/vms/2)
00:11:54.805  
00:11:54.805  INFO: VM2 ready
00:11:54.805  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:11:55.063  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:11:55.997  INFO: all VMs ready
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@973 -- # return 0
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@55 -- # timing_exit launch_vms
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@57 -- # timing_enter run_vm_cmd
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@59 -- # fio_disks=
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@61 -- # for vm_num in $used_vms
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@62 -- # qemu_mask_param=VM_0_qemu_mask
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@64 -- # host_name=VM-0-4-5
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@65 -- # vm_exec 0 'hostname VM-0-4-5'
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=0
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:11:55.997    19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:11:55.997    19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:11:55.997    19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:11:55.997    19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:55.997    19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:11:55.997    19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'hostname VM-0-4-5'
00:11:55.997  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
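The hostname `VM-0-4-5` set above comes from the per-VM QEMU CPU mask (vfio_user_fio.sh @62-@64): the mask name is built from the VM number, dereferenced with bash indirect expansion, and any commas become dashes. A sketch under the assumption that the masks are `4-5`, `6-7`, `8-9` (consistent with the hostnames in this log, but not confirmed by it):

```shell
# Assumed per-VM CPU masks, matching the VM-0-4-5 / VM-1-6-7 hostnames above.
VM_0_qemu_mask="4-5"
VM_1_qemu_mask="6,7"

vm_num=0
qemu_mask_param=VM_${vm_num}_qemu_mask
# ${!var//,/-}: indirect expansion of the mask, with commas turned to dashes
host_name="VM-${vm_num}-${!qemu_mask_param//,/-}"
echo "$host_name"   # -> VM-0-4-5
```

The comma-to-dash substitution matters because a QEMU cpuset like `6,7` would otherwise produce an invalid hostname.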
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@66 -- # vm_start_fio_server --fio-bin=/usr/src/fio-static/fio 0
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@977 -- # local OPTIND optchar
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@978 -- # local readonly=
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@979 -- # local fio_bin=
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@980 -- # getopts :-: optchar
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@981 -- # case "$optchar" in
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@983 -- # case "$OPTARG" in
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@984 -- # local fio_bin=/usr/src/fio-static/fio
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@980 -- # getopts :-: optchar
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@993 -- # shift 1
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@994 -- # for vm_num in "$@"
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@995 -- # notice 'Starting fio server on VM0'
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Starting fio server on VM0'
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Starting fio server on VM0'
00:11:55.997  INFO: Starting fio server on VM0
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@996 -- # [[ /usr/src/fio-static/fio != '' ]]
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@997 -- # vm_exec 0 'cat > /root/fio; chmod +x /root/fio'
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=0
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:11:55.997    19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:11:55.997    19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:11:55.997    19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:11:55.997    19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:55.997    19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:11:55.997    19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:11:55.997   19:10:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'cat > /root/fio; chmod +x /root/fio'
00:11:55.997  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:11:56.256   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@998 -- # vm_exec 0 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:11:56.256   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:11:56.256   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:11:56.256   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:56.256   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=0
00:11:56.256   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:11:56.256    19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:11:56.256    19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:11:56.256    19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:11:56.256    19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:56.256    19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:11:56.256    19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:11:56.256   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:11:56.514  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
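The option loop in `vm_start_fio_server` traced above (@980-@986) uses the `getopts ':-:'` idiom for long options: a lone `-` in the optstring is declared to take an argument, so `--fio-bin=PATH` arrives as option `-` with `OPTARG=fio-bin=PATH`, which is then split manually. A minimal sketch of that parse (the helper name is illustrative):

```shell
# Minimal sketch of the getopts ':-:' long-option trick seen above:
# '-' matches any '--long' option and OPTARG carries the remainder,
# so 'fio-bin=PATH' can be split with a parameter expansion.
parse_fio_bin() {
    local OPTIND optchar fio_bin=
    while getopts ':-:' optchar "$@"; do
        case "$optchar" in
            -)
                case "$OPTARG" in
                    fio-bin=*) fio_bin=${OPTARG#*=} ;;
                esac
                ;;
        esac
    done
    echo "$fio_bin"
}
```

This is why the trace shows `shift 1` afterwards: once getopts consumes `--fio-bin=...`, the remaining positional argument is the VM number.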
00:11:56.514   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@67 -- # vm_check_nvme_location 0
00:11:56.514    19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # vm_exec 0 'grep -l SPDK /sys/class/nvme/*/model'
00:11:56.514    19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # awk -F/ '{print $5"n1"}'
00:11:56.514    19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:11:56.514    19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:11:56.514    19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:56.514    19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=0
00:11:56.514    19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:11:56.514     19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:11:56.514     19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:11:56.514     19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:11:56.514     19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:56.514     19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:11:56.514     19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:11:56.514    19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l SPDK /sys/class/nvme/*/model'
00:11:56.514  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:11:56.772   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # SCSI_DISK=nvme0n1
00:11:56.772   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1046 -- # [[ -z nvme0n1 ]]
00:11:56.772    19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@69 -- # printf :/dev/%s nvme0n1
00:11:56.773   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@69 -- # fio_disks+=' --vm=0:/dev/nvme0n1'
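The `vm_check_nvme_location` step above (@1045) finds, inside the guest, the NVMe controller whose model string contains "SPDK" (`grep -l` prints the matching sysfs path), then derives the namespace-1 block device name with awk. A sketch with the grep output replaced by a hard-coded sample path, since there is no guest here:

```shell
# Sketch of the @1045 pipeline: the 5th '/'-separated field of
# /sys/class/nvme/<ctrl>/model is the controller name; appending "n1"
# yields the first namespace's block device.
# The sample path stands in for 'grep -l SPDK /sys/class/nvme/*/model'.
sample="/sys/class/nvme/nvme0/model"
disk=$(echo "$sample" | awk -F/ '{print $5 "n1"}')
echo "$disk"   # nvme0 -> nvme0n1

# The test then accumulates one '--vm=N:/dev/<disk>' argument per VM:
fio_disks=" --vm=0:/dev/$disk"
```

This is how the log arrives at `fio_disks+=' --vm=0:/dev/nvme0n1'` for each VM in turn.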
00:11:56.773   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@61 -- # for vm_num in $used_vms
00:11:56.773   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@62 -- # qemu_mask_param=VM_1_qemu_mask
00:11:56.773   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@64 -- # host_name=VM-1-6-7
00:11:56.773   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@65 -- # vm_exec 1 'hostname VM-1-6-7'
00:11:56.773   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:11:56.773   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:56.773   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:56.773   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=1
00:11:56.773   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:11:56.773    19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:11:56.773    19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:11:56.773    19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:56.773    19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:56.773    19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:11:56.773    19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:11:56.773   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'hostname VM-1-6-7'
00:11:56.773  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:11:56.773   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@66 -- # vm_start_fio_server --fio-bin=/usr/src/fio-static/fio 1
00:11:56.773   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@977 -- # local OPTIND optchar
00:11:56.773   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@978 -- # local readonly=
00:11:56.773   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@979 -- # local fio_bin=
00:11:56.773   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@980 -- # getopts :-: optchar
00:11:56.773   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@981 -- # case "$optchar" in
00:11:56.773   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@983 -- # case "$OPTARG" in
00:11:56.773   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@984 -- # local fio_bin=/usr/src/fio-static/fio
00:11:56.773   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@980 -- # getopts :-: optchar
00:11:56.773   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@993 -- # shift 1
00:11:56.773   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@994 -- # for vm_num in "$@"
00:11:56.773   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@995 -- # notice 'Starting fio server on VM1'
00:11:56.773   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Starting fio server on VM1'
00:11:56.773   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:11:56.773   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:11:56.773   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:11:56.773   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:56.773   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:11:56.773   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Starting fio server on VM1'
00:11:56.773  INFO: Starting fio server on VM1
00:11:56.773   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@996 -- # [[ /usr/src/fio-static/fio != '' ]]
00:11:56.773   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@997 -- # vm_exec 1 'cat > /root/fio; chmod +x /root/fio'
00:11:56.773   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:11:56.773   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:56.773   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:56.773   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=1
00:11:56.773   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:11:56.773    19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:11:56.773    19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:11:56.773    19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:56.773    19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:56.773    19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:11:56.773    19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:11:56.773   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/fio; chmod +x /root/fio'
00:11:56.773  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:11:57.031   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@998 -- # vm_exec 1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:11:57.031   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:11:57.031   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:57.031   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:57.031   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=1
00:11:57.031   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:11:57.031    19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:11:57.031    19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:11:57.031    19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:57.031    19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:57.031    19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:11:57.031    19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:11:57.031   19:10:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:11:57.290  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:11:57.290   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@67 -- # vm_check_nvme_location 1
00:11:57.290    19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # vm_exec 1 'grep -l SPDK /sys/class/nvme/*/model'
00:11:57.290    19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # awk -F/ '{print $5"n1"}'
00:11:57.290    19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:11:57.290    19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:57.290    19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:57.290    19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=1
00:11:57.290    19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:11:57.290     19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:11:57.290     19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:11:57.290     19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:57.290     19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:57.290     19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:11:57.290     19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:11:57.290    19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'grep -l SPDK /sys/class/nvme/*/model'
00:11:57.290  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:11:57.549   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # SCSI_DISK=nvme0n1
00:11:57.549   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1046 -- # [[ -z nvme0n1 ]]
00:11:57.549    19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@69 -- # printf :/dev/%s nvme0n1
00:11:57.549   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@69 -- # fio_disks+=' --vm=1:/dev/nvme0n1'
00:11:57.549   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@61 -- # for vm_num in $used_vms
00:11:57.549   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@62 -- # qemu_mask_param=VM_2_qemu_mask
00:11:57.549   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@64 -- # host_name=VM-2-8-9
00:11:57.549   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@65 -- # vm_exec 2 'hostname VM-2-8-9'
00:11:57.549   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 2
00:11:57.549   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:57.549   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:57.549   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=2
00:11:57.549   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:11:57.549    19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 2
00:11:57.549    19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 2
00:11:57.549    19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:57.549    19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:57.549    19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/2
00:11:57.549    19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/2/ssh_socket
00:11:57.549   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10200 127.0.0.1 'hostname VM-2-8-9'
00:11:57.549  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:11:57.549   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@66 -- # vm_start_fio_server --fio-bin=/usr/src/fio-static/fio 2
00:11:57.549   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@977 -- # local OPTIND optchar
00:11:57.549   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@978 -- # local readonly=
00:11:57.549   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@979 -- # local fio_bin=
00:11:57.549   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@980 -- # getopts :-: optchar
00:11:57.549   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@981 -- # case "$optchar" in
00:11:57.549   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@983 -- # case "$OPTARG" in
00:11:57.549   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@984 -- # local fio_bin=/usr/src/fio-static/fio
00:11:57.549   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@980 -- # getopts :-: optchar
00:11:57.549   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@993 -- # shift 1
00:11:57.549   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@994 -- # for vm_num in "$@"
00:11:57.549   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@995 -- # notice 'Starting fio server on VM2'
00:11:57.549   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Starting fio server on VM2'
00:11:57.549   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:11:57.549   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:11:57.549   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:11:57.549   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:57.549   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:11:57.549   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Starting fio server on VM2'
00:11:57.549  INFO: Starting fio server on VM2
00:11:57.549   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@996 -- # [[ /usr/src/fio-static/fio != '' ]]
00:11:57.549   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@997 -- # vm_exec 2 'cat > /root/fio; chmod +x /root/fio'
00:11:57.549   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 2
00:11:57.549   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:57.549   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:57.549   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=2
00:11:57.549   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:11:57.549    19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 2
00:11:57.549    19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 2
00:11:57.549    19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:57.549    19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:57.549    19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/2
00:11:57.549    19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/2/ssh_socket
00:11:57.549   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10200 127.0.0.1 'cat > /root/fio; chmod +x /root/fio'
00:11:57.549  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:11:57.807   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@998 -- # vm_exec 2 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:11:57.807   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 2
00:11:57.807   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:57.807   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:57.807   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=2
00:11:57.807   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:11:57.807    19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 2
00:11:57.807    19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 2
00:11:57.807    19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:57.807    19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:57.807    19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/2
00:11:57.807    19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/2/ssh_socket
00:11:57.807   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10200 127.0.0.1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:11:58.065  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
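Every guest command in this log goes through the same `vm_exec` pattern from `vhost/common.sh`: each test VM exposes SSH on a host-forwarded localhost port (10000 for VM 0, 10100 for VM 1, 10200 for VM 2) recorded in `/root/vhost_test/vms/<n>/ssh_socket`, and the helper wraps `sshpass`/`ssh` around it. A minimal sketch of that pattern — the helper name is illustrative, and the port is derived from the numbering visible in the log rather than read from the `ssh_socket` file as the real harness does:

```shell
# Sketch of the vm_exec pattern seen throughout this log (illustrative
# helper name; the real harness reads the port from the VM's ssh_socket
# file instead of computing it).
vm_exec_sketch() {
    local vm_num=$1; shift
    # Port scheme as it appears in the log: VM N listens on 10000 + N*100.
    local port=$((10000 + vm_num * 100))
    # Print (rather than run) the ssh command line used against the guest.
    echo "sshpass -p root ssh -o UserKnownHostsFile=/dev/null" \
         "-o StrictHostKeyChecking=no -o User=root -p ${port} 127.0.0.1 $*"
}

vm_exec_sketch 2 'hostname VM-2-8-9'
```

The two `-o` options disable known-hosts tracking, which is why every invocation still emits the "Permanently added ... to the list of known hosts" warning.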
00:11:58.066   19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@67 -- # vm_check_nvme_location 2
00:11:58.066    19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # vm_exec 2 'grep -l SPDK /sys/class/nvme/*/model'
00:11:58.066    19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # awk -F/ '{print $5"n1"}'
00:11:58.066    19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 2
00:11:58.066    19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:58.066    19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:58.066    19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=2
00:11:58.066    19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:11:58.066     19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 2
00:11:58.066     19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 2
00:11:58.066     19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:58.066     19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:58.066     19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/2
00:11:58.066     19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/2/ssh_socket
00:11:58.066    19:10:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10200 127.0.0.1 'grep -l SPDK /sys/class/nvme/*/model'
00:11:58.066  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:11:58.340   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # SCSI_DISK=nvme0n1
00:11:58.340   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1046 -- # [[ -z nvme0n1 ]]
00:11:58.340    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@69 -- # printf :/dev/%s nvme0n1
00:11:58.340   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@69 -- # fio_disks+=' --vm=2:/dev/nvme0n1'
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@72 -- # job_file=default_integrity.job
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@73 -- # run_fio --fio-bin=/usr/src/fio-static/fio --job-file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job --out=/root/vhost_test/fio_results --vm=0:/dev/nvme0n1 --vm=1:/dev/nvme0n1 --vm=2:/dev/nvme0n1
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1053 -- # local arg
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1054 -- # local job_file=
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1055 -- # local fio_bin=
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1056 -- # vms=()
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1056 -- # local vms
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1057 -- # local out=
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1058 -- # local vm
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1059 -- # local run_server_mode=true
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1060 -- # local run_plugin_mode=false
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1061 -- # local fio_start_cmd
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1062 -- # local fio_output_format=normal
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1063 -- # local fio_gtod_reduce=false
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1064 -- # local wait_for_fio=true
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1069 -- # local fio_bin=/usr/src/fio-static/fio
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1068 -- # local job_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1072 -- # local out=/root/vhost_test/fio_results
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1073 -- # mkdir -p /root/vhost_test/fio_results
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1070 -- # vms+=("${arg#*=}")
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1070 -- # vms+=("${arg#*=}")
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1070 -- # vms+=("${arg#*=}")
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1092 -- # [[ -n /usr/src/fio-static/fio ]]
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1092 -- # [[ ! -r /usr/src/fio-static/fio ]]
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1097 -- # [[ -z /usr/src/fio-static/fio ]]
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1101 -- # [[ ! -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job ]]
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1106 -- # fio_start_cmd='/usr/src/fio-static/fio --eta=never '
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1108 -- # local job_fname
00:11:58.341    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1109 -- # basename /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1109 -- # job_fname=default_integrity.job
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1110 -- # log_fname=default_integrity.log
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1111 -- # fio_start_cmd+=' --output=/root/vhost_test/fio_results/default_integrity.log --output-format=normal '
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1114 -- # for vm in "${vms[@]}"
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1115 -- # local vm_num=0
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1116 -- # local vmdisks=/dev/nvme0n1
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1118 -- # sed 's@filename=@filename=/dev/nvme0n1@;s@description=\(.*\)@description=\1 (VM=0)@' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1119 -- # vm_exec 0 'cat > /root/default_integrity.job'
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=0
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:11:58.341    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:11:58.341    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:11:58.341    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:11:58.341    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:58.341    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:11:58.341    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'cat > /root/default_integrity.job'
00:11:58.341  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1121 -- # false
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1125 -- # vm_exec 0 cat /root/default_integrity.job
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=0
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:11:58.341    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:11:58.341    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:11:58.341    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:11:58.341    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:58.341    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:11:58.341    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:11:58.341   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 cat /root/default_integrity.job
00:11:58.341  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:11:58.656  [global]
00:11:58.656  blocksize_range=4k-512k
00:11:58.656  iodepth=512
00:11:58.656  iodepth_batch=128
00:11:58.656  iodepth_low=256
00:11:58.656  ioengine=libaio
00:11:58.656  size=1G
00:11:58.656  io_size=4G
00:11:58.656  filename=/dev/nvme0n1
00:11:58.656  group_reporting
00:11:58.656  thread
00:11:58.656  numjobs=1
00:11:58.656  direct=1
00:11:58.656  rw=randwrite
00:11:58.656  do_verify=1
00:11:58.656  verify=md5
00:11:58.656  verify_backlog=1024
00:11:58.656  fsync_on_close=1
00:11:58.656  verify_state_save=0
00:11:58.656  [nvme-host]
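The job file echoed back above is `default_integrity.job` with `filename=` rewritten to the guest's `/dev/nvme0n1`: a random-write workload over a 1 GiB file (4 GiB of total I/O) whose point is data integrity — every write is read back and verified against an md5 checksum. A quick offline sanity check of those integrity knobs, as a sketch (job file contents copied from the log; the check itself is illustrative):

```shell
# Write a copy of the job file from the log and confirm the
# verification-related settings are present exactly as expected.
job=$(mktemp)
cat > "$job" <<'EOF'
[global]
blocksize_range=4k-512k
iodepth=512
iodepth_batch=128
iodepth_low=256
ioengine=libaio
size=1G
io_size=4G
filename=/dev/nvme0n1
group_reporting
thread
numjobs=1
direct=1
rw=randwrite
do_verify=1
verify=md5
verify_backlog=1024
fsync_on_close=1
verify_state_save=0
[nvme-host]
EOF

# -x matches whole lines, so a commented-out or altered key would fail.
for key in rw=randwrite do_verify=1 verify=md5 verify_backlog=1024; do
    grep -qx "$key" "$job" && echo "ok: $key"
done
```

`verify_backlog=1024` makes fio interleave verification every 1024 blocks instead of verifying only after all writes finish, which keeps memory for pending verifies bounded during the run.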
00:11:58.656   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1127 -- # true
00:11:58.656    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1128 -- # vm_fio_socket 0
00:11:58.656    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@326 -- # vm_num_is_valid 0
00:11:58.656    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:11:58.656    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:58.656    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@327 -- # local vm_dir=/root/vhost_test/vms/0
00:11:58.656    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@329 -- # cat /root/vhost_test/vms/0/fio_socket
00:11:58.657   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1128 -- # fio_start_cmd+='--client=127.0.0.1,10001 --remote-config /root/default_integrity.job '
00:11:58.657   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1131 -- # true
00:11:58.657   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1114 -- # for vm in "${vms[@]}"
00:11:58.657   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1115 -- # local vm_num=1
00:11:58.657   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1116 -- # local vmdisks=/dev/nvme0n1
00:11:58.657   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1118 -- # sed 's@filename=@filename=/dev/nvme0n1@;s@description=\(.*\)@description=\1 (VM=1)@' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:11:58.657   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1119 -- # vm_exec 1 'cat > /root/default_integrity.job'
00:11:58.657   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:11:58.657   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:58.657   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:58.657   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=1
00:11:58.657   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:11:58.657    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:11:58.657    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:11:58.657    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:58.657    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:58.657    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:11:58.657    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:11:58.657   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/default_integrity.job'
00:11:58.657  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:11:58.657   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1121 -- # false
00:11:58.657   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1125 -- # vm_exec 1 cat /root/default_integrity.job
00:11:58.657   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:11:58.657   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:58.657   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:58.657   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=1
00:11:58.657   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:11:58.657    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:11:58.657    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:11:58.657    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:58.657    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:58.657    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:11:58.657    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:11:58.657   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 cat /root/default_integrity.job
00:11:58.657  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:11:58.936  [global]
00:11:58.936  blocksize_range=4k-512k
00:11:58.936  iodepth=512
00:11:58.936  iodepth_batch=128
00:11:58.936  iodepth_low=256
00:11:58.936  ioengine=libaio
00:11:58.936  size=1G
00:11:58.936  io_size=4G
00:11:58.936  filename=/dev/nvme0n1
00:11:58.936  group_reporting
00:11:58.936  thread
00:11:58.936  numjobs=1
00:11:58.936  direct=1
00:11:58.936  rw=randwrite
00:11:58.936  do_verify=1
00:11:58.936  verify=md5
00:11:58.936  verify_backlog=1024
00:11:58.936  fsync_on_close=1
00:11:58.936  verify_state_save=0
00:11:58.936  [nvme-host]
00:11:58.936   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1127 -- # true
00:11:58.936    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1128 -- # vm_fio_socket 1
00:11:58.936    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@326 -- # vm_num_is_valid 1
00:11:58.936    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:58.936    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:58.936    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@327 -- # local vm_dir=/root/vhost_test/vms/1
00:11:58.936    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@329 -- # cat /root/vhost_test/vms/1/fio_socket
00:11:58.936   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1128 -- # fio_start_cmd+='--client=127.0.0.1,10101 --remote-config /root/default_integrity.job '
00:11:58.936   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1131 -- # true
00:11:58.936   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1114 -- # for vm in "${vms[@]}"
00:11:58.936   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1115 -- # local vm_num=2
00:11:58.936   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1116 -- # local vmdisks=/dev/nvme0n1
00:11:58.936   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1118 -- # sed 's@filename=@filename=/dev/nvme0n1@;s@description=\(.*\)@description=\1 (VM=2)@' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:11:58.936   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1119 -- # vm_exec 2 'cat > /root/default_integrity.job'
00:11:58.936   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 2
00:11:58.936   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:58.936   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:58.936   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=2
00:11:58.936   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:11:58.936    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 2
00:11:58.936    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 2
00:11:58.936    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:58.936    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:58.936    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/2
00:11:58.936    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/2/ssh_socket
00:11:58.936   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10200 127.0.0.1 'cat > /root/default_integrity.job'
00:11:58.936  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:11:58.936   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1121 -- # false
00:11:58.936   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1125 -- # vm_exec 2 cat /root/default_integrity.job
00:11:58.936   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 2
00:11:58.936   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:58.936   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:58.936   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=2
00:11:58.936   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:11:58.936    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 2
00:11:58.936    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 2
00:11:58.936    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:58.936    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:58.936    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/2
00:11:58.936    19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/2/ssh_socket
00:11:58.936   19:10:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10200 127.0.0.1 cat /root/default_integrity.job
00:11:59.193  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:11:59.193  [global]
00:11:59.193  blocksize_range=4k-512k
00:11:59.193  iodepth=512
00:11:59.193  iodepth_batch=128
00:11:59.193  iodepth_low=256
00:11:59.193  ioengine=libaio
00:11:59.193  size=1G
00:11:59.193  io_size=4G
00:11:59.193  filename=/dev/nvme0n1
00:11:59.193  group_reporting
00:11:59.193  thread
00:11:59.193  numjobs=1
00:11:59.193  direct=1
00:11:59.193  rw=randwrite
00:11:59.193  do_verify=1
00:11:59.193  verify=md5
00:11:59.193  verify_backlog=1024
00:11:59.193  fsync_on_close=1
00:11:59.193  verify_state_save=0
00:11:59.193  [nvme-host]
00:11:59.193   19:10:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1127 -- # true
00:11:59.193    19:10:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1128 -- # vm_fio_socket 2
00:11:59.194    19:10:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@326 -- # vm_num_is_valid 2
00:11:59.194    19:10:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:59.194    19:10:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:11:59.194    19:10:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@327 -- # local vm_dir=/root/vhost_test/vms/2
00:11:59.194    19:10:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@329 -- # cat /root/vhost_test/vms/2/fio_socket
00:11:59.194   19:10:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1128 -- # fio_start_cmd+='--client=127.0.0.1,10201 --remote-config /root/default_integrity.job '
00:11:59.194   19:10:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1131 -- # true
00:11:59.194   19:10:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1147 -- # true
00:11:59.194   19:10:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1161 -- # /usr/src/fio-static/fio --eta=never --output=/root/vhost_test/fio_results/default_integrity.log --output-format=normal --client=127.0.0.1,10001 --remote-config /root/default_integrity.job --client=127.0.0.1,10101 --remote-config /root/default_integrity.job --client=127.0.0.1,10201 --remote-config /root/default_integrity.job
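The single fio invocation above is the client side of fio's client/server mode: one `--client=<ip>,<port>` plus `--remote-config` pair per VM, pointed at the fio servers started earlier inside each guest (`/root/fio --server --daemonize=...`), with the ports taken from each VM's `fio_socket` file. A sketch of how `run_fio` accumulates that command string (port list mirrors the log: 10001, 10101, 10201):

```shell
# Rebuild the client-mode fio command line the way run_fio does:
# start from the base command, then append one client/remote-config
# pair per VM. Ports here are the fio_socket values from the log.
fio_start_cmd='/usr/src/fio-static/fio --eta=never'
fio_start_cmd+=' --output=/root/vhost_test/fio_results/default_integrity.log --output-format=normal'
for port in 10001 10101 10201; do
    fio_start_cmd+=" --client=127.0.0.1,${port} --remote-config /root/default_integrity.job"
done
echo "$fio_start_cmd"
```

Running all three guests from one client process is what lets the harness collect a single combined `default_integrity.log`, which is dumped a few lines further down.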
00:12:14.062   19:10:42 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1162 -- # sleep 1
00:12:14.063   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1164 -- # [[ normal == \j\s\o\n ]]
00:12:14.063   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1172 -- # [[ ! -n '' ]]
00:12:14.063   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1173 -- # cat /root/vhost_test/fio_results/default_integrity.log
00:12:14.063  hostname=VM-2-8-9, be=0, 64-bit, os=Linux, arch=x86-64, fio=fio-3.35, flags=1
00:12:14.063  hostname=VM-1-6-7, be=0, 64-bit, os=Linux, arch=x86-64, fio=fio-3.35, flags=1
00:12:14.063  hostname=VM-0-4-5, be=0, 64-bit, os=Linux, arch=x86-64, fio=fio-3.35, flags=1
00:12:14.063  <VM-2-8-9> nvme-host: (g=0): rw=randwrite, bs=(R) 4096B-512KiB, (W) 4096B-512KiB, (T) 4096B-512KiB, ioengine=libaio, iodepth=512
00:12:14.063  <VM-1-6-7> nvme-host: (g=0): rw=randwrite, bs=(R) 4096B-512KiB, (W) 4096B-512KiB, (T) 4096B-512KiB, ioengine=libaio, iodepth=512
00:12:14.063  <VM-0-4-5> nvme-host: (g=0): rw=randwrite, bs=(R) 4096B-512KiB, (W) 4096B-512KiB, (T) 4096B-512KiB, ioengine=libaio, iodepth=512
00:12:14.063  <VM-1-6-7> Starting 1 thread
00:12:14.063  <VM-0-4-5> Starting 1 thread
00:12:14.063  <VM-2-8-9> Starting 1 thread
00:12:14.063  <VM-1-6-7> 
00:12:14.063  nvme-host: (groupid=0, jobs=1): err= 0: pid=945: Fri Dec  6 19:10:41 2024
00:12:14.063    read: IOPS=1024, BW=200MiB/s (209MB/s)(2072MiB/10380msec)
00:12:14.063      slat (usec): min=29, max=19563, avg=10977.31, stdev=6640.67
00:12:14.063      clat (usec): min=1968, max=54512, avg=25083.74, stdev=12912.22
00:12:14.063       lat (usec): min=2032, max=55280, avg=36061.06, stdev=11600.96
00:12:14.063      clat percentiles (usec):
00:12:14.063       |  1.00th=[ 2057],  5.00th=[ 3261], 10.00th=[12649], 20.00th=[13829],
00:12:14.063       | 30.00th=[15533], 40.00th=[16450], 50.00th=[25035], 60.00th=[30016],
00:12:14.063       | 70.00th=[32113], 80.00th=[36439], 90.00th=[45876], 95.00th=[47973],
00:12:14.063       | 99.00th=[50594], 99.50th=[54264], 99.90th=[54264], 99.95th=[54264],
00:12:14.063       | 99.99th=[54264]
00:12:14.063    write: IOPS=2084, BW=406MiB/s (426MB/s)(2072MiB/5103msec); 0 zone resets
00:12:14.063      slat (usec): min=289, max=70988, avg=26798.62, stdev=16721.26
00:12:14.063      clat (msec): min=3, max=195, avg=64.36, stdev=47.08
00:12:14.063       lat (msec): min=4, max=200, avg=91.16, stdev=52.46
00:12:14.063      clat percentiles (msec):
00:12:14.063       |  1.00th=[    5],  5.00th=[    8], 10.00th=[   11], 20.00th=[   16],
00:12:14.063       | 30.00th=[   21], 40.00th=[   41], 50.00th=[   62], 60.00th=[   67],
00:12:14.063       | 70.00th=[   96], 80.00th=[  122], 90.00th=[  131], 95.00th=[  144],
00:12:14.063       | 99.00th=[  174], 99.50th=[  176], 99.90th=[  192], 99.95th=[  197],
00:12:14.063       | 99.99th=[  197]
00:12:14.063     bw (  KiB/s): min=157144, max=314288, per=48.56%, avg=201894.10, stdev=72506.22, samples=21
00:12:14.063     iops        : min=  788, max= 1576, avg=1012.38, stdev=363.55, samples=21
00:12:14.063    lat (msec)   : 2=0.04%, 4=3.21%, 10=4.04%, 20=31.23%, 50=31.80%
00:12:14.063    lat (msec)   : 100=16.36%, 250=13.33%
00:12:14.063    cpu          : usr=81.81%, sys=1.97%, ctx=820, majf=0, minf=16
00:12:14.063    IO depths    : 1=0.0%, 2=0.6%, 4=1.2%, 8=1.8%, 16=3.6%, 32=7.8%, >=64=84.8%
00:12:14.063       submit    : 0=0.0%, 4=1.8%, 8=1.8%, 16=3.2%, 32=6.4%, 64=11.8%, >=64=75.0%
00:12:14.063       complete  : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:12:14.063       issued rwts: total=10638,10638,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:14.063       latency   : target=0, window=0, percentile=100.00%, depth=512
00:12:14.063  
00:12:14.063  Run status group 0 (all jobs):
00:12:14.063     READ: bw=200MiB/s (209MB/s), 200MiB/s-200MiB/s (209MB/s-209MB/s), io=2072MiB (2172MB), run=10380-10380msec
00:12:14.063    WRITE: bw=406MiB/s (426MB/s), 406MiB/s-406MiB/s (426MB/s-426MB/s), io=2072MiB (2172MB), run=5103-5103msec
00:12:14.063  
00:12:14.063  Disk stats (read/write):
00:12:14.063    nvme0n1: ios=80/0, merge=0/0, ticks=7/0, in_queue=7, util=26.50%
00:12:14.063  <VM-2-8-9> 
00:12:14.063  nvme-host: (groupid=0, jobs=1): err= 0: pid=942: Fri Dec  6 19:10:42 2024
00:12:14.063    read: IOPS=1142, BW=192MiB/s (201MB/s)(2048MiB/10685msec)
00:12:14.063      slat (usec): min=48, max=38355, avg=10398.91, stdev=7071.41
00:12:14.063      clat (msec): min=5, max=354, avg=150.34, stdev=76.28
00:12:14.063       lat (msec): min=10, max=368, avg=160.74, stdev=76.99
00:12:14.063      clat percentiles (msec):
00:12:14.063       |  1.00th=[    8],  5.00th=[   27], 10.00th=[   55], 20.00th=[   80],
00:12:14.063       | 30.00th=[  104], 40.00th=[  128], 50.00th=[  148], 60.00th=[  169],
00:12:14.063       | 70.00th=[  192], 80.00th=[  213], 90.00th=[  255], 95.00th=[  284],
00:12:14.063       | 99.00th=[  326], 99.50th=[  338], 99.90th=[  351], 99.95th=[  351],
00:12:14.063       | 99.99th=[  355]
00:12:14.063    write: IOPS=1214, BW=204MiB/s (214MB/s)(2048MiB/10048msec); 0 zone resets
00:12:14.063      slat (usec): min=344, max=95419, avg=29169.59, stdev=18648.49
00:12:14.063      clat (msec): min=11, max=333, avg=139.72, stdev=71.97
00:12:14.063       lat (msec): min=12, max=393, avg=168.89, stdev=76.82
00:12:14.063      clat percentiles (msec):
00:12:14.063       |  1.00th=[   14],  5.00th=[   31], 10.00th=[   43], 20.00th=[   73],
00:12:14.063       | 30.00th=[  101], 40.00th=[  115], 50.00th=[  134], 60.00th=[  153],
00:12:14.063       | 70.00th=[  174], 80.00th=[  199], 90.00th=[  245], 95.00th=[  268],
00:12:14.063       | 99.00th=[  334], 99.50th=[  334], 99.90th=[  334], 99.95th=[  334],
00:12:14.063       | 99.99th=[  334]
00:12:14.063     bw (  KiB/s): min= 3968, max=407248, per=100.00%, avg=220752.84, stdev=101974.05, samples=19
00:12:14.063     iops        : min=   34, max= 2048, avg=1285.05, stdev=623.07, samples=19
00:12:14.063    lat (msec)   : 10=0.63%, 20=2.90%, 50=6.35%, 100=19.75%, 250=60.81%
00:12:14.063    lat (msec)   : 500=9.56%
00:12:14.063    cpu          : usr=80.19%, sys=2.12%, ctx=561, majf=0, minf=34
00:12:14.063    IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.5%, >=64=99.1%
00:12:14.063       submit    : 0=0.0%, 4=0.0%, 8=1.2%, 16=0.0%, 32=0.0%, 64=19.2%, >=64=79.6%
00:12:14.063       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:12:14.063       issued rwts: total=12208,12208,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:14.063       latency   : target=0, window=0, percentile=100.00%, depth=512
00:12:14.063  
00:12:14.063  Run status group 0 (all jobs):
00:12:14.063     READ: bw=192MiB/s (201MB/s), 192MiB/s-192MiB/s (201MB/s-201MB/s), io=2048MiB (2147MB), run=10685-10685msec
00:12:14.063    WRITE: bw=204MiB/s (214MB/s), 204MiB/s-204MiB/s (214MB/s-214MB/s), io=2048MiB (2147MB), run=10048-10048msec
00:12:14.063  
00:12:14.063  Disk stats (read/write):
00:12:14.063    nvme0n1: ios=5/0, merge=0/0, ticks=5/0, in_queue=5, util=21.58%
00:12:14.063  <VM-0-4-5> 
00:12:14.063  nvme-host: (groupid=0, jobs=1): err= 0: pid=945: Fri Dec  6 19:10:42 2024
00:12:14.063    read: IOPS=991, BW=193MiB/s (202MB/s)(2072MiB/10733msec)
00:12:14.063      slat (usec): min=26, max=25038, avg=12273.15, stdev=7402.82
00:12:14.063      clat (usec): min=2049, max=71852, avg=27750.05, stdev=15030.76
00:12:14.063       lat (usec): min=8087, max=72240, avg=40023.20, stdev=14618.52
00:12:14.063      clat percentiles (usec):
00:12:14.063       |  1.00th=[ 2180],  5.00th=[ 8291], 10.00th=[11076], 20.00th=[14353],
00:12:14.063       | 30.00th=[16319], 40.00th=[17957], 50.00th=[25297], 60.00th=[31065],
00:12:14.063       | 70.00th=[36439], 80.00th=[41681], 90.00th=[48497], 95.00th=[56361],
00:12:14.063       | 99.00th=[64226], 99.50th=[64226], 99.90th=[71828], 99.95th=[71828],
00:12:14.063       | 99.99th=[71828]
00:12:14.063    write: IOPS=2007, BW=391MiB/s (410MB/s)(2072MiB/5298msec); 0 zone resets
00:12:14.063      slat (usec): min=265, max=79027, avg=27918.36, stdev=17375.00
00:12:14.063      clat (msec): min=3, max=204, avg=67.10, stdev=48.45
00:12:14.063       lat (msec): min=4, max=224, avg=95.02, stdev=54.06
00:12:14.063      clat percentiles (msec):
00:12:14.063       |  1.00th=[    5],  5.00th=[    9], 10.00th=[   12], 20.00th=[   18],
00:12:14.063       | 30.00th=[   23], 40.00th=[   47], 50.00th=[   62], 60.00th=[   70],
00:12:14.063       | 70.00th=[  105], 80.00th=[  121], 90.00th=[  134], 95.00th=[  142],
00:12:14.063       | 99.00th=[  182], 99.50th=[  192], 99.90th=[  194], 99.95th=[  194],
00:12:14.063       | 99.99th=[  205]
00:12:14.063     bw (  KiB/s): min=157144, max=314288, per=48.59%, avg=194559.24, stdev=68583.26, samples=21
00:12:14.063     iops        : min=  788, max= 1576, avg=975.62, stdev=343.91, samples=21
00:12:14.063    lat (msec)   : 4=1.43%, 10=5.13%, 20=28.45%, 50=30.79%, 100=18.46%
00:12:14.063    lat (msec)   : 250=15.74%
00:12:14.063    cpu          : usr=79.20%, sys=1.79%, ctx=882, majf=0, minf=16
00:12:14.063    IO depths    : 1=0.0%, 2=0.6%, 4=1.2%, 8=1.8%, 16=3.6%, 32=7.8%, >=64=84.8%
00:12:14.063       submit    : 0=0.0%, 4=1.8%, 8=1.8%, 16=3.2%, 32=6.4%, 64=11.8%, >=64=75.0%
00:12:14.063       complete  : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:12:14.063       issued rwts: total=10638,10638,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:14.063       latency   : target=0, window=0, percentile=100.00%, depth=512
00:12:14.063  
00:12:14.063  Run status group 0 (all jobs):
00:12:14.063     READ: bw=193MiB/s (202MB/s), 193MiB/s-193MiB/s (202MB/s-202MB/s), io=2072MiB (2172MB), run=10733-10733msec
00:12:14.063    WRITE: bw=391MiB/s (410MB/s), 391MiB/s-391MiB/s (410MB/s-410MB/s), io=2072MiB (2172MB), run=5298-5298msec
00:12:14.063  
00:12:14.063  Disk stats (read/write):
00:12:14.063    nvme0n1: ios=80/0, merge=0/0, ticks=53/0, in_queue=53, util=22.54%
00:12:14.063  All clients: (groupid=0, jobs=3): err= 0: pid=0: Fri Dec  6 19:10:42 2024
00:12:14.063    read: IOPS=3119, BW=577Mi (605M)(6191MiB/10733msec)
00:12:14.063      slat (usec): min=26, max=38355, avg=11178.12, stdev=7089.75
00:12:14.063      clat (usec): min=1968, max=354581, avg=71599.99, stdev=76192.66
00:12:14.063       lat (msec): min=2, max=368, avg=82.78, stdev=75.91
00:12:14.063    write: IOPS=3332, BW=616Mi (646M)(6191MiB/10048msec); 0 zone resets
00:12:14.063      slat (usec): min=265, max=95419, avg=28018.80, stdev=17676.98
00:12:14.063      clat (msec): min=3, max=333, avg=92.71, stdev=67.88
00:12:14.063       lat (msec): min=4, max=393, avg=120.73, stdev=72.72
00:12:14.063     bw (  KiB/s): min=318256, max=1035824, per=60.22%, avg=617206.18, stdev=80248.89, samples=61
00:12:14.063     iops        : min= 1610, max= 5200, avg=3273.05, stdev=447.15, samples=61
00:12:14.063    lat (msec)   : 2=0.01%, 4=1.47%, 10=3.15%, 20=20.02%, 50=22.20%
00:12:14.063    lat (msec)   : 100=18.26%, 250=31.41%, 500=3.48%
00:12:14.063    cpu          : usr=80.38%, sys=1.96%, ctx=2263, majf=0, minf=66
00:12:14.063    IO depths    : 1=0.0%, 2=0.4%, 4=0.8%, 8=1.1%, 16=2.3%, 32=5.2%, >=64=90.0%
00:12:14.063       submit    : 0=0.0%, 4=1.2%, 8=1.6%, 16=2.1%, 32=4.1%, 64=14.4%, >=64=76.6%
00:12:14.063       complete  : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5%
00:12:14.063       issued rwts: total=33484,33484,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:14.063   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@75 -- # timing_exit run_vm_cmd
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@732 -- # xtrace_disable
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@77 -- # vm_shutdown_all
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@487 -- # local timeo=90 vms vm
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@489 -- # vms=($(vm_list_all))
00:12:14.064    19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@489 -- # vm_list_all
00:12:14.064    19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@466 -- # vms=()
00:12:14.064    19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@466 -- # local vms
00:12:14.064    19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:12:14.064    19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@468 -- # (( 3 > 0 ))
00:12:14.064    19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/0 /root/vhost_test/vms/1 /root/vhost_test/vms/2
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@492 -- # vm_shutdown 0
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@417 -- # vm_num_is_valid 0
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/0
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/0 ]]
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@424 -- # vm_is_running 0
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 0
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/0
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@376 -- # local vm_pid
00:12:14.064    19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/0/qemu.pid
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # vm_pid=531376
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@379 -- # /bin/kill -0 531376
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@380 -- # return 0
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/0'
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/0'
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/0'
00:12:14.064  INFO: Shutting down virtual machine /root/vhost_test/vms/0
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@432 -- # set +e
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@433 -- # vm_exec 0 'nohup sh -c '\''shutdown -h -P now'\'''
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=0
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:12:14.064    19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:12:14.064    19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:12:14.064    19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:12:14.064    19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:12:14.064    19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:12:14.064    19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:12:14.064  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@434 -- # notice 'VM0 is shutting down - wait a while to complete'
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'VM0 is shutting down - wait a while to complete'
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: VM0 is shutting down - wait a while to complete'
00:12:14.064  INFO: VM0 is shutting down - wait a while to complete
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@435 -- # set -e
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@492 -- # vm_shutdown 1
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@417 -- # vm_num_is_valid 1
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/1
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/1 ]]
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@424 -- # vm_is_running 1
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@376 -- # local vm_pid
00:12:14.064    19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # vm_pid=531549
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@379 -- # /bin/kill -0 531549
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@380 -- # return 0
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1'
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1'
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1'
00:12:14.064  INFO: Shutting down virtual machine /root/vhost_test/vms/1
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@432 -- # set +e
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@433 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\'''
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=1
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:12:14.064    19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:12:14.064    19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:12:14.064    19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:14.064    19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:12:14.064    19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:12:14.064    19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:12:14.064  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@434 -- # notice 'VM1 is shutting down - wait a while to complete'
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete'
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete'
00:12:14.064  INFO: VM1 is shutting down - wait a while to complete
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@435 -- # set -e
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@492 -- # vm_shutdown 2
00:12:14.064   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@417 -- # vm_num_is_valid 2
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/2
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/2 ]]
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@424 -- # vm_is_running 2
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 2
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/2
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/2/qemu.pid ]]
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@376 -- # local vm_pid
00:12:14.065    19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/2/qemu.pid
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # vm_pid=531823
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@379 -- # /bin/kill -0 531823
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@380 -- # return 0
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/2'
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/2'
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/2'
00:12:14.065  INFO: Shutting down virtual machine /root/vhost_test/vms/2
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@432 -- # set +e
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@433 -- # vm_exec 2 'nohup sh -c '\''shutdown -h -P now'\'''
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 2
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=2
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:12:14.065    19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 2
00:12:14.065    19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 2
00:12:14.065    19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:14.065    19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:12:14.065    19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/2
00:12:14.065    19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/2/ssh_socket
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10200 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:12:14.065  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@434 -- # notice 'VM2 is shutting down - wait a while to complete'
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'VM2 is shutting down - wait a while to complete'
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: VM2 is shutting down - wait a while to complete'
00:12:14.065  INFO: VM2 is shutting down - wait a while to complete
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@435 -- # set -e
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@495 -- # notice 'Waiting for VMs to shutdown...'
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...'
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...'
00:12:14.065  INFO: Waiting for VMs to shutdown...
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@496 -- # (( timeo-- > 0 && 3 > 0 ))
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # vm_is_running 0
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 0
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/0
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@376 -- # local vm_pid
00:12:14.065    19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/0/qemu.pid
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # vm_pid=531376
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@379 -- # /bin/kill -0 531376
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@380 -- # return 0
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # vm_is_running 1
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@376 -- # local vm_pid
00:12:14.065    19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # vm_pid=531549
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@379 -- # /bin/kill -0 531549
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@380 -- # return 0
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # vm_is_running 2
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 2
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/2
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/2/qemu.pid ]]
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@376 -- # local vm_pid
00:12:14.065    19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/2/qemu.pid
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # vm_pid=531823
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@379 -- # /bin/kill -0 531823
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@380 -- # return 0
00:12:14.065   19:10:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@500 -- # sleep 1
00:12:14.065  [2024-12-06 19:10:44.734455] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/0/muser/domain/muser0/0: disabling controller
00:12:14.065   19:10:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@496 -- # (( timeo-- > 0 && 3 > 0 ))
00:12:14.065   19:10:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:12:14.065   19:10:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # vm_is_running 0
00:12:14.065   19:10:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 0
00:12:14.065   19:10:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:12:14.065   19:10:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:12:14.065   19:10:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/0
00:12:14.065   19:10:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:12:14.065   19:10:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@373 -- # return 1
00:12:14.065   19:10:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:12:14.065   19:10:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:12:14.065   19:10:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # vm_is_running 1
00:12:14.065   19:10:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:12:14.065   19:10:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:14.065   19:10:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:12:14.065   19:10:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:12:14.065   19:10:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:12:14.065   19:10:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@376 -- # local vm_pid
00:12:14.065    19:10:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:12:14.065   19:10:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # vm_pid=531549
00:12:14.065   19:10:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@379 -- # /bin/kill -0 531549
00:12:14.065   19:10:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@380 -- # return 0
00:12:14.066   19:10:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:12:14.066   19:10:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # vm_is_running 2
00:12:14.066   19:10:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 2
00:12:14.066   19:10:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:14.066   19:10:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:12:14.066   19:10:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/2
00:12:14.066   19:10:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/2/qemu.pid ]]
00:12:14.066   19:10:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@376 -- # local vm_pid
00:12:14.066    19:10:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/2/qemu.pid
00:12:14.066   19:10:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # vm_pid=531823
00:12:14.066   19:10:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@379 -- # /bin/kill -0 531823
00:12:14.066   19:10:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@380 -- # return 0
00:12:14.066   19:10:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@500 -- # sleep 1
00:12:14.337  [2024-12-06 19:10:45.038390] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/2/muser/domain/muser2/2: disabling controller
00:12:14.338  [2024-12-06 19:10:45.199533] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller
00:12:15.269   19:10:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@496 -- # (( timeo-- > 0 && 2 > 0 ))
00:12:15.269   19:10:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:12:15.269   19:10:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # vm_is_running 1
00:12:15.269   19:10:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:12:15.269   19:10:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:15.269   19:10:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:12:15.269   19:10:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:12:15.269   19:10:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:12:15.269   19:10:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@373 -- # return 1
00:12:15.269   19:10:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:12:15.269   19:10:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:12:15.269   19:10:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # vm_is_running 2
00:12:15.269   19:10:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 2
00:12:15.269   19:10:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:15.269   19:10:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:12:15.269   19:10:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/2
00:12:15.269   19:10:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/2/qemu.pid ]]
00:12:15.269   19:10:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@373 -- # return 1
00:12:15.269   19:10:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:12:15.269   19:10:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@500 -- # sleep 1
00:12:16.201   19:10:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@496 -- # (( timeo-- > 0 && 0 > 0 ))
00:12:16.201   19:10:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@503 -- # (( 0 == 0 ))
00:12:16.201   19:10:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@504 -- # notice 'All VMs successfully shut down'
00:12:16.201   19:10:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down'
00:12:16.201   19:10:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:12:16.201   19:10:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:12:16.201   19:10:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:12:16.201   19:10:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:16.201   19:10:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:12:16.201   19:10:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down'
00:12:16.201  INFO: All VMs successfully shut down
00:12:16.201   19:10:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@505 -- # return 0
00:12:16.201   19:10:47 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@79 -- # timing_enter clean_vfio_user
00:12:16.201   19:10:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@726 -- # xtrace_disable
00:12:16.201   19:10:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:12:16.201    19:10:47 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@81 -- # seq 0 2
00:12:16.201   19:10:47 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@81 -- # for i in $(seq 0 $vm_no)
00:12:16.201   19:10:47 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@82 -- # vm_muser_dir=/root/vhost_test/vms/0/muser
00:12:16.201   19:10:47 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@83 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_remove_listener nqn.2019-07.io.spdk:cnode0 -t vfiouser -a /root/vhost_test/vms/0/muser/domain/muser0/0 -s 0
00:12:16.458   19:10:47 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@84 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_delete_subsystem nqn.2019-07.io.spdk:cnode0
00:12:16.716   19:10:47 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@85 -- # (( i == vm_no ))
00:12:16.716   19:10:47 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@88 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_malloc_delete Malloc0
00:12:17.280   19:10:47 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@81 -- # for i in $(seq 0 $vm_no)
00:12:17.281   19:10:47 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@82 -- # vm_muser_dir=/root/vhost_test/vms/1/muser
00:12:17.281   19:10:47 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@83 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_remove_listener nqn.2019-07.io.spdk:cnode1 -t vfiouser -a /root/vhost_test/vms/1/muser/domain/muser1/1 -s 0
00:12:17.281   19:10:48 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@84 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_delete_subsystem nqn.2019-07.io.spdk:cnode1
00:12:17.537   19:10:48 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@85 -- # (( i == vm_no ))
00:12:17.538   19:10:48 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@88 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_malloc_delete Malloc1
00:12:18.470   19:10:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@81 -- # for i in $(seq 0 $vm_no)
00:12:18.470   19:10:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@82 -- # vm_muser_dir=/root/vhost_test/vms/2/muser
00:12:18.470   19:10:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@83 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_remove_listener nqn.2019-07.io.spdk:cnode2 -t vfiouser -a /root/vhost_test/vms/2/muser/domain/muser2/2 -s 0
00:12:18.470   19:10:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@84 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_delete_subsystem nqn.2019-07.io.spdk:cnode2
00:12:18.728   19:10:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@85 -- # (( i == vm_no ))
00:12:18.728   19:10:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@86 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_nvme_detach_controller Nvme0
00:12:20.626   19:10:51 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@92 -- # vhost_kill 0
00:12:20.626   19:10:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@202 -- # local rc=0
00:12:20.626   19:10:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@203 -- # local vhost_name=0
00:12:20.626   19:10:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@205 -- # [[ -z 0 ]]
00:12:20.626   19:10:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@210 -- # local vhost_dir
00:12:20.626    19:10:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@211 -- # get_vhost_dir 0
00:12:20.626    19:10:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@105 -- # local vhost_name=0
00:12:20.626    19:10:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:12:20.626    19:10:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:12:20.626   19:10:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@211 -- # vhost_dir=/root/vhost_test/vhost/0
00:12:20.626   19:10:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@212 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:12:20.626   19:10:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@214 -- # [[ ! -r /root/vhost_test/vhost/0/vhost.pid ]]
00:12:20.626   19:10:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@219 -- # timing_enter vhost_kill
00:12:20.626   19:10:51 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@726 -- # xtrace_disable
00:12:20.626   19:10:51 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:12:20.626   19:10:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@220 -- # local vhost_pid
00:12:20.626    19:10:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@221 -- # cat /root/vhost_test/vhost/0/vhost.pid
00:12:20.626   19:10:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@221 -- # vhost_pid=530287
00:12:20.626   19:10:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@222 -- # notice 'killing vhost (PID 530287) app'
00:12:20.626   19:10:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'killing vhost (PID 530287) app'
00:12:20.626   19:10:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:12:20.626   19:10:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:12:20.626   19:10:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:12:20.626   19:10:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:20.626   19:10:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:12:20.626   19:10:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: killing vhost (PID 530287) app'
00:12:20.626  INFO: killing vhost (PID 530287) app
00:12:20.626   19:10:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@224 -- # kill -INT 530287
00:12:20.626   19:10:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@225 -- # notice 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:12:20.626   19:10:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:12:20.626   19:10:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:12:20.626   19:10:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:12:20.626   19:10:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:12:20.626   19:10:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:20.626   19:10:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:12:20.626   19:10:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: sent SIGINT to vhost app - waiting 60 seconds to exit'
00:12:20.626  INFO: sent SIGINT to vhost app - waiting 60 seconds to exit
00:12:20.626   19:10:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@226 -- # (( i = 0 ))
00:12:20.626   19:10:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@226 -- # (( i < 60 ))
00:12:20.626   19:10:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@227 -- # kill -0 530287
00:12:20.626   19:10:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@228 -- # echo .
00:12:20.626  .
00:12:20.626   19:10:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@229 -- # sleep 1
00:12:21.566   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@226 -- # (( i++ ))
00:12:21.566   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@226 -- # (( i < 60 ))
00:12:21.566   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@227 -- # kill -0 530287
00:12:21.566  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 227: kill: (530287) - No such process
00:12:21.566   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@231 -- # break
00:12:21.566   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@234 -- # kill -0 530287
00:12:21.566  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 234: kill: (530287) - No such process
00:12:21.566   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@239 -- # kill -0 530287
00:12:21.566  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 239: kill: (530287) - No such process
00:12:21.566   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@245 -- # is_pid_child 530287
00:12:21.566   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1686 -- # local pid=530287 _pid
00:12:21.566   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1688 -- # read -r _pid
00:12:21.566    19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1685 -- # jobs -pr
00:12:21.566   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1689 -- # (( pid == _pid ))
00:12:21.566   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1688 -- # read -r _pid
00:12:21.566   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1692 -- # return 1
00:12:21.566   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@257 -- # timing_exit vhost_kill
00:12:21.566   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@732 -- # xtrace_disable
00:12:21.566   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:12:21.566   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@259 -- # rm -rf /root/vhost_test/vhost/0
00:12:21.566   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@261 -- # return 0
00:12:21.566   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@93 -- # timing_exit clean_vfio_user
00:12:21.566   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@732 -- # xtrace_disable
00:12:21.566   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:12:21.566   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@94 -- # vhosttestfini
00:12:21.566   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@54 -- # '[' '' == iso ']'
00:12:21.566   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@1 -- # clean_vfio_user
00:12:21.566   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@6 -- # vm_kill_all
00:12:21.566   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@476 -- # local vm
00:12:21.566    19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@477 -- # vm_list_all
00:12:21.566    19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@466 -- # vms=()
00:12:21.566    19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@466 -- # local vms
00:12:21.567    19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:12:21.567    19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@468 -- # (( 3 > 0 ))
00:12:21.567    19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/0 /root/vhost_test/vms/1 /root/vhost_test/vms/2
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@477 -- # for vm in $(vm_list_all)
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@478 -- # vm_kill 0
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@442 -- # vm_num_is_valid 0
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@443 -- # local vm_dir=/root/vhost_test/vms/0
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@445 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@446 -- # return 0
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@477 -- # for vm in $(vm_list_all)
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@478 -- # vm_kill 1
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@442 -- # vm_num_is_valid 1
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@443 -- # local vm_dir=/root/vhost_test/vms/1
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@445 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@446 -- # return 0
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@477 -- # for vm in $(vm_list_all)
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@478 -- # vm_kill 2
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@442 -- # vm_num_is_valid 2
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@443 -- # local vm_dir=/root/vhost_test/vms/2
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@445 -- # [[ ! -r /root/vhost_test/vms/2/qemu.pid ]]
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@446 -- # return 0
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@481 -- # rm -rf /root/vhost_test/vms
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@7 -- # vhost_kill 0
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@202 -- # local rc=0
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@203 -- # local vhost_name=0
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@205 -- # [[ -z 0 ]]
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@210 -- # local vhost_dir
00:12:21.567    19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@211 -- # get_vhost_dir 0
00:12:21.567    19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@105 -- # local vhost_name=0
00:12:21.567    19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:12:21.567    19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@211 -- # vhost_dir=/root/vhost_test/vhost/0
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@212 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@214 -- # [[ ! -r /root/vhost_test/vhost/0/vhost.pid ]]
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@215 -- # warning 'no vhost pid file found'
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@90 -- # message WARN 'no vhost pid file found'
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=WARN
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'WARN: no vhost pid file found'
00:12:21.567  WARN: no vhost pid file found
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@216 -- # return 0
00:12:21.567  
00:12:21.567  real	1m2.632s
00:12:21.567  user	4m8.618s
00:12:21.567  sys	0m3.656s
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:12:21.567  ************************************
00:12:21.567  END TEST vfio_user_nvme_fio
00:12:21.567  ************************************
00:12:21.567   19:10:52 vfio_user_qemu -- vfio_user/vfio_user.sh@16 -- # run_test vfio_user_nvme_restart_vm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/vfio_user_restart_vm.sh
00:12:21.567   19:10:52 vfio_user_qemu -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:21.567   19:10:52 vfio_user_qemu -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:21.567   19:10:52 vfio_user_qemu -- common/autotest_common.sh@10 -- # set +x
00:12:21.567  ************************************
00:12:21.567  START TEST vfio_user_nvme_restart_vm
00:12:21.567  ************************************
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/vfio_user_restart_vm.sh
00:12:21.567  * Looking for test storage...
00:12:21.567  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme
00:12:21.567    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:12:21.567     19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1711 -- # lcov --version
00:12:21.567     19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:12:21.567    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:12:21.567    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:21.567    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:21.567    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:21.567    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@336 -- # IFS=.-:
00:12:21.567    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@336 -- # read -ra ver1
00:12:21.567    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@337 -- # IFS=.-:
00:12:21.567    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@337 -- # read -ra ver2
00:12:21.567    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@338 -- # local 'op=<'
00:12:21.567    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@340 -- # ver1_l=2
00:12:21.567    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@341 -- # ver2_l=1
00:12:21.567    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:21.567    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@344 -- # case "$op" in
00:12:21.567    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@345 -- # : 1
00:12:21.567    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:21.567    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:21.567     19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@365 -- # decimal 1
00:12:21.567     19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@353 -- # local d=1
00:12:21.567     19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:21.567     19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@355 -- # echo 1
00:12:21.567    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@365 -- # ver1[v]=1
00:12:21.567     19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@366 -- # decimal 2
00:12:21.567     19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@353 -- # local d=2
00:12:21.567     19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:21.567     19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@355 -- # echo 2
00:12:21.567    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@366 -- # ver2[v]=2
00:12:21.567    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:21.567    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:21.567    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@368 -- # return 0
00:12:21.567    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:21.567    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:12:21.567  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:21.567  		--rc genhtml_branch_coverage=1
00:12:21.567  		--rc genhtml_function_coverage=1
00:12:21.567  		--rc genhtml_legend=1
00:12:21.567  		--rc geninfo_all_blocks=1
00:12:21.567  		--rc geninfo_unexecuted_blocks=1
00:12:21.567  		
00:12:21.567  		'
00:12:21.567    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:12:21.567  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:21.567  		--rc genhtml_branch_coverage=1
00:12:21.567  		--rc genhtml_function_coverage=1
00:12:21.567  		--rc genhtml_legend=1
00:12:21.567  		--rc geninfo_all_blocks=1
00:12:21.567  		--rc geninfo_unexecuted_blocks=1
00:12:21.567  		
00:12:21.567  		'
00:12:21.567    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:12:21.567  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:21.567  		--rc genhtml_branch_coverage=1
00:12:21.567  		--rc genhtml_function_coverage=1
00:12:21.567  		--rc genhtml_legend=1
00:12:21.567  		--rc geninfo_all_blocks=1
00:12:21.567  		--rc geninfo_unexecuted_blocks=1
00:12:21.567  		
00:12:21.567  		'
00:12:21.567    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:12:21.567  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:21.567  		--rc genhtml_branch_coverage=1
00:12:21.567  		--rc genhtml_function_coverage=1
00:12:21.567  		--rc genhtml_legend=1
00:12:21.567  		--rc geninfo_all_blocks=1
00:12:21.567  		--rc geninfo_unexecuted_blocks=1
00:12:21.567  		
00:12:21.567  		'
00:12:21.567   19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/common.sh
00:12:21.568    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/common.sh@6 -- # : 128
00:12:21.568    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/common.sh@7 -- # : 512
00:12:21.568    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/common.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh
00:12:21.568     19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@6 -- # : false
00:12:21.568     19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@7 -- # : /root/vhost_test
00:12:21.568     19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@8 -- # : /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:12:21.568     19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@9 -- # : qemu-img
00:12:21.568      19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/..
00:12:21.568     19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest
00:12:21.568     19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:12:21.568     19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:12:21.568     19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:12:21.568     19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:12:21.568     19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:12:21.568       19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/vfio_user_restart_vm.sh
00:12:21.568      19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme
00:12:21.568     19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme
00:12:21.568     19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:12:21.568     19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:12:21.568     19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:12:21.568     19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:12:21.568     19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config
00:12:21.568      19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:12:21.568      19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@2 -- # vhost_0_main_core=0
00:12:21.568      19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:12:21.568      19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:12:21.568      19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:12:21.568      19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:12:21.568      19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:12:21.568      19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:12:21.568      19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:12:21.568      19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:12:21.568      19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:12:21.568      19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:12:21.568      19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:12:21.568      19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:12:21.568      19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:12:21.568      19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:12:21.568      19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:12:21.568      19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:12:21.568      19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:12:21.568      19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:12:21.568      19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:12:21.568      19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:12:21.568      19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:12:21.568      19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:12:21.568      19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:12:21.568      19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:12:21.568     19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh
00:12:21.568      19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:12:21.568      19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:12:21.568      19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:12:21.568      19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler
00:12:21.568      19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:12:21.568      19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh
00:12:21.568       19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:12:21.568        19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/cgroups.sh@244 -- # check_cgroup
00:12:21.568        19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:12:21.568        19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:12:21.568        19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/cgroups.sh@10 -- # echo 2
00:12:21.568       19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/cgroups.sh@244 -- # cgroup_version=2
00:12:21.568    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/common.sh@11 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:12:21.568    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/common.sh@14 -- # [[ ! -e /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 ]]
00:12:21.568    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/common.sh@19 -- # QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:12:21.568   19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/common.sh
00:12:21.568   19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/autotest.config
00:12:21.568    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@1 -- # vhost_0_reactor_mask='[0-3]'
00:12:21.568    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@2 -- # vhost_0_main_core=0
00:12:21.568    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@4 -- # VM_0_qemu_mask=4-5
00:12:21.568    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:12:21.568    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@7 -- # VM_1_qemu_mask=6-7
00:12:21.568    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:12:21.568    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@10 -- # VM_2_qemu_mask=8-9
00:12:21.568    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:12:21.568   19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@13 -- # bdfs=($(get_nvme_bdfs))
00:12:21.568    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@13 -- # get_nvme_bdfs
00:12:21.568    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1498 -- # bdfs=()
00:12:21.568    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1498 -- # local bdfs
00:12:21.568    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:12:21.568     19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/gen_nvme.sh
00:12:21.568     19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:12:21.828    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:12:21.828    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0b:00.0
00:12:21.828    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@14 -- # get_vhost_dir 0
00:12:21.828    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:12:21.828    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:12:21.828    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:12:21.828   19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@14 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:12:21.828   19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@16 -- # trap clean_vfio_user EXIT
00:12:21.828   19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@18 -- # vhosttestinit
00:12:21.828   19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@37 -- # '[' '' == iso ']'
00:12:21.828   19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@41 -- # [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz ]]
00:12:21.828   19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@41 -- # [[ ! -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:12:21.828   19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@46 -- # [[ ! -f /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:12:21.828   19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@20 -- # vfio_user_run 0
00:12:21.828   19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@11 -- # local vhost_name=0
00:12:21.828   19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@12 -- # local vfio_user_dir nvmf_pid_file rpc_py
00:12:21.828    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@14 -- # get_vhost_dir 0
00:12:21.828    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:12:21.828    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:12:21.828    19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:12:21.828   19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@14 -- # vfio_user_dir=/root/vhost_test/vhost/0
00:12:21.828   19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@15 -- # nvmf_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:12:21.828   19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@16 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:12:21.828   19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@18 -- # mkdir -p /root/vhost_test/vhost/0
00:12:21.828   19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@20 -- # timing_enter vfio_user_start
00:12:21.828   19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@726 -- # xtrace_disable
00:12:21.828   19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:12:21.828   19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@22 -- # nvmfpid=538253
00:12:21.828   19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/nvmf_tgt -r /root/vhost_test/vhost/0/rpc.sock -m 0xf -s 512
00:12:21.828   19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@23 -- # echo 538253
00:12:21.828   19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@25 -- # echo 'Process pid: 538253'
00:12:21.828  Process pid: 538253
00:12:21.828   19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@26 -- # echo 'waiting for app to run...'
00:12:21.828  waiting for app to run...
00:12:21.828   19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@27 -- # waitforlisten 538253 /root/vhost_test/vhost/0/rpc.sock
00:12:21.828   19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@835 -- # '[' -z 538253 ']'
00:12:21.828   19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@839 -- # local rpc_addr=/root/vhost_test/vhost/0/rpc.sock
00:12:21.828   19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:21.828   19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...'
00:12:21.828  Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...
00:12:21.828   19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:21.828   19:10:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:12:21.829  [2024-12-06 19:10:52.645283] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:12:21.829  [2024-12-06 19:10:52.645454] [ DPDK EAL parameters: nvmf --no-shconf -c 0xf -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid538253 ]
00:12:21.829  EAL: No free 2048 kB hugepages reported on node 1
00:12:22.395  [2024-12-06 19:10:53.050801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:12:22.395  [2024-12-06 19:10:53.167765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:22.395  [2024-12-06 19:10:53.167885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:12:22.395  [2024-12-06 19:10:53.167923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:22.395  [2024-12-06 19:10:53.167952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:12:22.653   19:10:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:22.654   19:10:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@868 -- # return 0
00:12:22.654   19:10:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@29 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_create_transport -t VFIOUSER
00:12:22.911   19:10:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@30 -- # timing_exit vfio_user_start
00:12:22.912   19:10:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@732 -- # xtrace_disable
00:12:22.912   19:10:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:12:23.170   19:10:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@22 -- # vm_muser_dir=/root/vhost_test/vms/1/muser
00:12:23.170   19:10:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@23 -- # rm -rf /root/vhost_test/vms/1/muser
00:12:23.170   19:10:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@24 -- # mkdir -p /root/vhost_test/vms/1/muser/domain/muser1/1
00:12:23.170   19:10:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@26 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_nvme_attach_controller -b Nvme0 -t pcie -a 0000:0b:00.0
00:12:26.449  Nvme0n1
00:12:26.449   19:10:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@27 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -s SPDK001 -a
00:12:26.449   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@28 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Nvme0n1
00:12:26.707   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@29 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /root/vhost_test/vms/1/muser/domain/muser1/1 -s 0
00:12:26.964   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@31 -- # vm_setup --disk-type=vfio_user --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=1
00:12:26.964   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@518 -- # xtrace_disable
00:12:26.964   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:12:26.964  WARN: removing existing VM in '/root/vhost_test/vms/1'
00:12:26.964  INFO: Creating new VM in /root/vhost_test/vms/1
00:12:26.964  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:12:26.964  INFO: TASK MASK: 6-7
00:12:26.964   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@671 -- # local node_num=0
00:12:26.964   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@672 -- # local boot_disk_present=false
00:12:26.964   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:12:26.964   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:12:26.964   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:26.964   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:12:26.964   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:26.964   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:12:26.965  INFO: NUMA NODE: 0
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@677 -- # [[ -n '' ]]
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@686 -- # [[ -z '' ]]
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@701 -- # IFS=,
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@701 -- # read -r disk disk_type _
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@702 -- # [[ -z '' ]]
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@702 -- # disk_type=vfio_user
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@704 -- # case $disk_type in
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@758 -- # notice 'using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:12:26.965  INFO: using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@759 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/$vm_num/muser/domain/muser$disk/$disk/cntrl")
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@760 -- # [[ 1 == '' ]]
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@780 -- # [[ -n '' ]]
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@785 -- # (( 0 ))
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh'
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh'
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh'
00:12:26.965  INFO: Saving to /root/vhost_test/vms/1/run.sh
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@787 -- # cat
00:12:26.965    19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/1/muser/domain/muser1/1/cntrl
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/1/run.sh
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@827 -- # echo 10100
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@828 -- # echo 10101
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@829 -- # echo 10102
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/1/migration_port
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@832 -- # [[ -z '' ]]
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@834 -- # echo 10104
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@835 -- # echo 101
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@837 -- # [[ -z '' ]]
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@838 -- # [[ -z '' ]]
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@32 -- # vm_run 1
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@843 -- # local run_all=false
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@844 -- # local vms_to_run=
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@846 -- # getopts a-: optchar
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@856 -- # false
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@859 -- # shift 0
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@860 -- # for vm in "$@"
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@861 -- # vm_num_is_valid 1
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]]
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@866 -- # vms_to_run+=' 1'
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@871 -- # vm_is_running 1
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@373 -- # return 1
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/1/run.sh'
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh'
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh'
00:12:26.965  INFO: running /root/vhost_test/vms/1/run.sh
00:12:26.965   19:10:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@877 -- # /root/vhost_test/vms/1/run.sh
00:12:27.223  Running VM in /root/vhost_test/vms/1
00:12:27.481  Waiting for QEMU pid file
00:12:27.737  [2024-12-06 19:10:58.564305] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: enabling controller
00:12:28.670  === qemu.log ===
00:12:28.670  === qemu.log ===
00:12:28.670   19:10:59 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@33 -- # vm_wait_for_boot 60 1
00:12:28.670   19:10:59 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@913 -- # assert_number 60
00:12:28.670   19:10:59 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@281 -- # [[ 60 =~ [0-9]+ ]]
00:12:28.670   19:10:59 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@281 -- # return 0
00:12:28.670   19:10:59 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@915 -- # xtrace_disable
00:12:28.670   19:10:59 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:12:28.670  INFO: Waiting for VMs to boot
00:12:28.670  INFO: waiting for VM1 (/root/vhost_test/vms/1)
00:12:40.869  [2024-12-06 19:11:11.303687] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller
00:12:40.869  [2024-12-06 19:11:11.317743] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller
00:12:40.869  [2024-12-06 19:11:11.321775] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: enabling controller
00:12:48.977  
00:12:48.977  INFO: VM1 ready
00:12:48.977  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:12:48.978  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:12:49.544  INFO: all VMs ready
00:12:49.544   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@973 -- # return 0
00:12:49.544   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@35 -- # vm_exec 1 lsblk
00:12:49.544   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:12:49.544   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:49.544   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:49.544   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:12:49.544   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@339 -- # shift
00:12:49.544    19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:12:49.544    19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:12:49.544    19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:49.544    19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:49.544    19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:12:49.544    19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:12:49.544   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 lsblk
00:12:49.802  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:12:49.802  NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
00:12:49.802  sda       8:0    0     5G  0 disk 
00:12:49.802  ├─sda1    8:1    0     1M  0 part 
00:12:49.802  ├─sda2    8:2    0  1000M  0 part /boot
00:12:49.802  ├─sda3    8:3    0   100M  0 part /boot/efi
00:12:49.802  ├─sda4    8:4    0     4M  0 part 
00:12:49.802  └─sda5    8:5    0   3.9G  0 part /home
00:12:49.802                                    /
00:12:49.802  zram0   252:0    0   946M  0 disk [SWAP]
00:12:49.802  nvme0n1 259:1    0 931.5G  0 disk 
00:12:49.802   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@37 -- # vm_shutdown_all
00:12:49.802   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@487 -- # local timeo=90 vms vm
00:12:49.802   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@489 -- # vms=($(vm_list_all))
00:12:49.802    19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@489 -- # vm_list_all
00:12:49.802    19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@466 -- # vms=()
00:12:49.802    19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@466 -- # local vms
00:12:49.802    19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:12:49.802    19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:12:49.802    19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:12:49.802   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:12:49.802   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@492 -- # vm_shutdown 1
00:12:49.802   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@417 -- # vm_num_is_valid 1
00:12:49.802   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:49.802   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:49.802   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/1
00:12:49.802   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/1 ]]
00:12:49.802   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@424 -- # vm_is_running 1
00:12:49.802   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:12:49.802   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:49.802   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:49.802   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:12:49.802   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:12:49.802   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:12:49.802    19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:12:49.802   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # vm_pid=538963
00:12:49.802   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 538963
00:12:49.803   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@380 -- # return 0
00:12:49.803   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1'
00:12:49.803   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1'
00:12:49.803   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:49.803   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:12:49.803   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:49.803   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:49.803   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:12:49.803   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1'
00:12:49.803  INFO: Shutting down virtual machine /root/vhost_test/vms/1
00:12:49.803   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@432 -- # set +e
00:12:49.803   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@433 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\'''
00:12:49.803   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:12:49.803   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:49.803   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:49.803   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:12:49.803   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@339 -- # shift
00:12:49.803    19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:12:49.803    19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:12:49.803    19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:49.803    19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:49.803    19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:12:49.803    19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:12:49.803   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:12:49.803  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:12:50.061   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@434 -- # notice 'VM1 is shutting down - wait a while to complete'
00:12:50.061   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete'
00:12:50.061   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:50.061   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:12:50.061   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:50.061   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:50.061   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:12:50.061   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete'
00:12:50.061  INFO: VM1 is shutting down - wait a while to complete
00:12:50.061   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@435 -- # set -e
00:12:50.061   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@495 -- # notice 'Waiting for VMs to shutdown...'
00:12:50.061   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...'
00:12:50.061   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:50.061   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:12:50.061   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:50.061   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:50.061   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:12:50.061   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...'
00:12:50.061  INFO: Waiting for VMs to shutdown...
00:12:50.061   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:12:50.061   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:12:50.061   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:12:50.061   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:12:50.061   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:50.061   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:50.061   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:12:50.061   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:12:50.061   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:12:50.061    19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:12:50.061   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # vm_pid=538963
00:12:50.061   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 538963
00:12:50.061   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@380 -- # return 0
00:12:50.061   19:11:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:12:50.998   19:11:21 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:12:50.998   19:11:21 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:12:50.998   19:11:21 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:12:50.998   19:11:21 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:12:50.998   19:11:21 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:50.998   19:11:21 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:50.998   19:11:21 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:12:50.998   19:11:21 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:12:50.998   19:11:21 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:12:50.998    19:11:21 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:12:50.998   19:11:21 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # vm_pid=538963
00:12:50.998   19:11:21 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 538963
00:12:50.998   19:11:21 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@380 -- # return 0
00:12:50.998   19:11:21 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:12:50.998  [2024-12-06 19:11:21.907828] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller
00:12:51.929   19:11:22 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:12:51.929   19:11:22 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:12:51.929   19:11:22 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:12:51.929   19:11:22 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:12:51.929   19:11:22 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:51.929   19:11:22 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:51.929   19:11:22 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:12:51.930   19:11:22 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:12:51.930   19:11:22 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@373 -- # return 1
00:12:51.930   19:11:22 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:12:51.930   19:11:22 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 0 > 0 ))
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@503 -- # (( 0 == 0 ))
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@504 -- # notice 'All VMs successfully shut down'
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down'
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down'
00:12:53.303  INFO: All VMs successfully shut down
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@505 -- # return 0
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@40 -- # vm_setup --disk-type=vfio_user --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=1
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@518 -- # xtrace_disable
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:12:53.303  WARN: removing existing VM in '/root/vhost_test/vms/1'
00:12:53.303  INFO: Creating new VM in /root/vhost_test/vms/1
00:12:53.303  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:12:53.303  INFO: TASK MASK: 6-7
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@671 -- # local node_num=0
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@672 -- # local boot_disk_present=false
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:12:53.303  INFO: NUMA NODE: 0
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@677 -- # [[ -n '' ]]
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@686 -- # [[ -z '' ]]
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@701 -- # IFS=,
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@701 -- # read -r disk disk_type _
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@702 -- # [[ -z '' ]]
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@702 -- # disk_type=vfio_user
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@704 -- # case $disk_type in
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@758 -- # notice 'using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:12:53.303  INFO: using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@759 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/$vm_num/muser/domain/muser$disk/$disk/cntrl")
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@760 -- # [[ 1 == '' ]]
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@780 -- # [[ -n '' ]]
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@785 -- # (( 0 ))
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh'
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh'
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh'
00:12:53.303  INFO: Saving to /root/vhost_test/vms/1/run.sh
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@787 -- # cat
00:12:53.303    19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/1/muser/domain/muser1/1/cntrl
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/1/run.sh
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@827 -- # echo 10100
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@828 -- # echo 10101
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@829 -- # echo 10102
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/1/migration_port
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@832 -- # [[ -z '' ]]
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@834 -- # echo 10104
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@835 -- # echo 101
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@837 -- # [[ -z '' ]]
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@838 -- # [[ -z '' ]]
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@41 -- # vm_run 1
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@843 -- # local run_all=false
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@844 -- # local vms_to_run=
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@846 -- # getopts a-: optchar
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@856 -- # false
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@859 -- # shift 0
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@860 -- # for vm in "$@"
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@861 -- # vm_num_is_valid 1
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:53.303   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]]
00:12:53.304   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@866 -- # vms_to_run+=' 1'
00:12:53.304   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:12:53.304   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@871 -- # vm_is_running 1
00:12:53.304   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:12:53.304   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:53.304   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:53.304   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:12:53.304   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:12:53.304   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@373 -- # return 1
00:12:53.304   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/1/run.sh'
00:12:53.304   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh'
00:12:53.304   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:53.304   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:12:53.304   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:53.304   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:53.304   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:12:53.304   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh'
00:12:53.304  INFO: running /root/vhost_test/vms/1/run.sh
00:12:53.304   19:11:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@877 -- # /root/vhost_test/vms/1/run.sh
00:12:53.304  Running VM in /root/vhost_test/vms/1
00:12:53.562  Waiting for QEMU pid file
00:12:53.562  [2024-12-06 19:11:24.480625] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: enabling controller
00:12:54.494  === qemu.log ===
00:12:54.494  === qemu.log ===
00:12:54.494   19:11:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@42 -- # vm_wait_for_boot 60 1
00:12:54.494   19:11:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@913 -- # assert_number 60
00:12:54.494   19:11:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@281 -- # [[ 60 =~ [0-9]+ ]]
00:12:54.494   19:11:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@281 -- # return 0
00:12:54.494   19:11:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@915 -- # xtrace_disable
00:12:54.494   19:11:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:12:54.494  INFO: Waiting for VMs to boot
00:12:54.494  INFO: waiting for VM1 (/root/vhost_test/vms/1)
00:13:06.683  [2024-12-06 19:11:37.292925] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller
00:13:06.683  [2024-12-06 19:11:37.301957] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller
00:13:06.683  [2024-12-06 19:11:37.305978] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: enabling controller
00:13:16.649  
00:13:16.649  INFO: VM1 ready
00:13:16.649  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:13:16.649  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:13:16.906  INFO: all VMs ready
00:13:16.906   19:11:47 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@973 -- # return 0
00:13:16.906   19:11:47 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@44 -- # vm_exec 1 lsblk
00:13:16.906   19:11:47 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:13:16.906   19:11:47 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:16.906   19:11:47 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:13:16.906   19:11:47 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:13:16.906   19:11:47 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@339 -- # shift
00:13:16.906    19:11:47 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:13:16.906    19:11:47 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:13:16.906    19:11:47 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:16.906    19:11:47 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:13:16.907    19:11:47 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:13:16.907    19:11:47 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:13:16.907   19:11:47 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 lsblk
00:13:16.907  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:13:17.164  NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
00:13:17.164  sda       8:0    0     5G  0 disk 
00:13:17.164  ├─sda1    8:1    0     1M  0 part 
00:13:17.164  ├─sda2    8:2    0  1000M  0 part /boot
00:13:17.164  ├─sda3    8:3    0   100M  0 part /boot/efi
00:13:17.164  ├─sda4    8:4    0     4M  0 part 
00:13:17.164  └─sda5    8:5    0   3.9G  0 part /home
00:13:17.164                                    /
00:13:17.164  zram0   252:0    0   946M  0 disk [SWAP]
00:13:17.164  nvme0n1 259:1    0 931.5G  0 disk 
00:13:17.164   19:11:47 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@47 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_remove_ns nqn.2019-07.io.spdk:cnode1 1
00:13:17.421   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@49 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_remove_listener nqn.2019-07.io.spdk:cnode1 -t vfiouser -a /root/vhost_test/vms/1/muser/domain/muser1/1 -s 0
00:13:17.679   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@53 -- # vm_exec 1 'echo 1 > /sys/class/nvme/nvme0/device/remove'
00:13:17.679   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:13:17.679   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:17.679   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:13:17.679   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:13:17.679   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@339 -- # shift
00:13:17.679    19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:13:17.679    19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:13:17.679    19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:17.679    19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:13:17.679    19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:13:17.679    19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:13:17.679   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'echo 1 > /sys/class/nvme/nvme0/device/remove'
00:13:17.679  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:13:17.679   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@55 -- # vm_shutdown_all
00:13:17.679   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@487 -- # local timeo=90 vms vm
00:13:17.679   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@489 -- # vms=($(vm_list_all))
00:13:17.679    19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@489 -- # vm_list_all
00:13:17.679    19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@466 -- # vms=()
00:13:17.679    19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@466 -- # local vms
00:13:17.679    19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:13:17.679    19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:13:17.679    19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
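`vm_list_all` above enumerates VM directories with the extglob pattern `+([0-9])` (one or more digits), so only numerically named directories under `$VM_DIR` count as VMs. A small sketch of that globbing, using a temp directory as a stand-in for `/root/vhost_test/vms`:

```shell
# Enable extended globbing; nullglob makes an empty match expand to nothing.
shopt -s extglob nullglob

# Temp directory standing in for $VM_DIR.
vm_dir=$(mktemp -d)
mkdir "$vm_dir/0" "$vm_dir/1" "$vm_dir/not_a_vm"

# +([0-9]) matches names made of one or more digits only.
vms=("$vm_dir"/+([0-9]))

# Same trick as vhost/common.sh: strip the directory prefix in one call.
basename --multiple "${vms[@]}"
```

Without `nullglob` (or the `(( ${#vms[@]} > 0 ))` guard the real script uses) an empty VM directory would leave the unexpanded pattern in the array.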
00:13:17.679   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:13:17.679   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@492 -- # vm_shutdown 1
00:13:17.679   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@417 -- # vm_num_is_valid 1
00:13:17.679   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:17.679   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:13:17.679   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/1
00:13:17.679   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/1 ]]
00:13:17.679   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@424 -- # vm_is_running 1
00:13:17.679   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:13:17.679   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:17.679   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:13:17.679   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:13:17.679   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:13:17.679   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:13:17.937    19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # vm_pid=542096
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 542096
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@380 -- # return 0
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1'
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1'
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1'
00:13:17.937  INFO: Shutting down virtual machine /root/vhost_test/vms/1
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@432 -- # set +e
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@433 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\'''
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@339 -- # shift
00:13:17.937    19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:13:17.937    19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:13:17.937    19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:17.937    19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:13:17.937    19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:13:17.937    19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:13:17.937  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@434 -- # notice 'VM1 is shutting down - wait a while to complete'
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete'
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete'
00:13:17.937  INFO: VM1 is shutting down - wait a while to complete
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@435 -- # set -e
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@495 -- # notice 'Waiting for VMs to shutdown...'
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...'
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...'
00:13:17.937  INFO: Waiting for VMs to shutdown...
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:13:17.937    19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # vm_pid=542096
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 542096
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@380 -- # return 0
00:13:17.937   19:11:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:13:18.869   19:11:49 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:13:18.869   19:11:49 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:13:18.869   19:11:49 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:13:18.869   19:11:49 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:13:18.869   19:11:49 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:18.869   19:11:49 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:13:18.869   19:11:49 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:13:18.869   19:11:49 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:13:18.869   19:11:49 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:13:18.869    19:11:49 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:13:18.869   19:11:49 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # vm_pid=542096
00:13:19.126   19:11:49 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 542096
00:13:19.126   19:11:49 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@380 -- # return 0
00:13:19.126   19:11:49 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:13:20.060   19:11:50 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:13:20.060   19:11:50 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:13:20.060   19:11:50 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:13:20.060   19:11:50 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:13:20.060   19:11:50 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:20.060   19:11:50 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:13:20.060   19:11:50 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:13:20.060   19:11:50 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:13:20.060   19:11:50 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@373 -- # return 1
00:13:20.060   19:11:50 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:13:20.060   19:11:50 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:13:20.994   19:11:51 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 0 > 0 ))
00:13:20.994   19:11:51 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@503 -- # (( 0 == 0 ))
00:13:20.994   19:11:51 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@504 -- # notice 'All VMs successfully shut down'
00:13:20.994   19:11:51 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down'
00:13:20.994   19:11:51 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:13:20.994   19:11:51 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:13:20.994   19:11:51 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:13:20.994   19:11:51 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:13:20.994   19:11:51 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:13:20.994   19:11:51 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down'
00:13:20.994  INFO: All VMs successfully shut down
00:13:20.994   19:11:51 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@505 -- # return 0
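The shutdown wait above is a `kill -0` polling loop: read the PID from `qemu.pid`, probe it once per second, and drop the VM from the list when the probe fails. A self-contained sketch of that pattern, with a short-lived background `sleep` standing in for the QEMU process:

```shell
# Stand-in for QEMU: a background process that exits on its own.
sleep 1 &
vm_pid=$!

# Poll with kill -0 (signal 0 = existence check) until the process is
# gone or the timeout budget runs out, mirroring vhost/common.sh@496-500.
timeo=10
while (( timeo-- > 0 )); do
    if ! kill -0 "$vm_pid" 2>/dev/null; then
        echo "process $vm_pid gone"
        break
    fi
    sleep 0.5
done

# Reap the child so its exit status is collected.
wait "$vm_pid" 2>/dev/null
```

Note the real script probes with `/bin/kill -0` because the QEMU PID is not a child of the test shell; inside bash, the builtin works the same way for this check.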
00:13:20.994   19:11:51 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@57 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_nvme_detach_controller Nvme0
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@58 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_delete_subsystem nqn.2019-07.io.spdk:cnode1
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@60 -- # vhosttestfini
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@54 -- # '[' '' == iso ']'
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@1 -- # clean_vfio_user
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@6 -- # vm_kill_all
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@476 -- # local vm
00:13:22.893    19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@477 -- # vm_list_all
00:13:22.893    19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@466 -- # vms=()
00:13:22.893    19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@466 -- # local vms
00:13:22.893    19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:13:22.893    19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:13:22.893    19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@477 -- # for vm in $(vm_list_all)
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@478 -- # vm_kill 1
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@442 -- # vm_num_is_valid 1
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@443 -- # local vm_dir=/root/vhost_test/vms/1
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@445 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@446 -- # return 0
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@481 -- # rm -rf /root/vhost_test/vms
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@7 -- # vhost_kill 0
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@202 -- # local rc=0
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@203 -- # local vhost_name=0
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@205 -- # [[ -z 0 ]]
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@210 -- # local vhost_dir
00:13:22.893    19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@211 -- # get_vhost_dir 0
00:13:22.893    19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:13:22.893    19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:13:22.893    19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@211 -- # vhost_dir=/root/vhost_test/vhost/0
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@212 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@214 -- # [[ ! -r /root/vhost_test/vhost/0/vhost.pid ]]
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@219 -- # timing_enter vhost_kill
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@726 -- # xtrace_disable
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@220 -- # local vhost_pid
00:13:22.893    19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@221 -- # cat /root/vhost_test/vhost/0/vhost.pid
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@221 -- # vhost_pid=538253
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@222 -- # notice 'killing vhost (PID 538253) app'
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'killing vhost (PID 538253) app'
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: killing vhost (PID 538253) app'
00:13:22.893  INFO: killing vhost (PID 538253) app
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@224 -- # kill -INT 538253
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@225 -- # notice 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: sent SIGINT to vhost app - waiting 60 seconds to exit'
00:13:22.893  INFO: sent SIGINT to vhost app - waiting 60 seconds to exit
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@226 -- # (( i = 0 ))
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@226 -- # (( i < 60 ))
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@227 -- # kill -0 538253
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@228 -- # echo .
00:13:22.893  .
00:13:22.893   19:11:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@229 -- # sleep 1
00:13:23.829   19:11:54 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@226 -- # (( i++ ))
00:13:23.829   19:11:54 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@226 -- # (( i < 60 ))
00:13:23.829   19:11:54 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@227 -- # kill -0 538253
00:13:23.829  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 227: kill: (538253) - No such process
00:13:23.829   19:11:54 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@231 -- # break
00:13:23.829   19:11:54 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@234 -- # kill -0 538253
00:13:23.829  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 234: kill: (538253) - No such process
00:13:23.829   19:11:54 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@239 -- # kill -0 538253
00:13:23.829  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 239: kill: (538253) - No such process
00:13:23.829   19:11:54 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@245 -- # is_pid_child 538253
00:13:23.829   19:11:54 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1686 -- # local pid=538253 _pid
00:13:23.829   19:11:54 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1688 -- # read -r _pid
00:13:23.829    19:11:54 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1685 -- # jobs -pr
00:13:23.829   19:11:54 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1689 -- # (( pid == _pid ))
00:13:23.829   19:11:54 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1688 -- # read -r _pid
00:13:23.829   19:11:54 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1692 -- # return 1
00:13:23.829   19:11:54 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@257 -- # timing_exit vhost_kill
00:13:23.829   19:11:54 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@732 -- # xtrace_disable
00:13:23.829   19:11:54 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:13:23.829   19:11:54 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@259 -- # rm -rf /root/vhost_test/vhost/0
00:13:23.829   19:11:54 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@261 -- # return 0
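`vhost_kill` above follows a signal-then-poll pattern: send a signal to the app PID, then loop up to 60 times with `kill -0`, printing a dot per iteration, until the process disappears. A sketch of that loop with a background `sleep` standing in for the vhost app; the real script sends SIGINT, but SIGTERM is used here because background children of a non-interactive shell ignore SIGINT:

```shell
# Stand-in for the vhost app.
sleep 30 &
pid=$!

# Signal it, then poll for exit, echoing a progress dot per iteration
# (mirroring vhost/common.sh@224-229, with a shorter sleep for the demo).
kill -TERM "$pid"
for ((i = 0; i < 60; i++)); do
    kill -0 "$pid" 2>/dev/null || break
    echo .
    sleep 0.2
done

wait "$pid" 2>/dev/null
echo "vhost app exited"
```

The `No such process` messages in the log are expected here: once the loop's probe fails, the script re-probes a couple more times to confirm the app is really gone before cleaning up the vhost directory.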
00:13:23.829  
00:13:23.829  real	1m2.466s
00:13:23.829  user	4m4.113s
00:13:23.829  sys	0m2.066s
00:13:23.829   19:11:54 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:23.829   19:11:54 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:13:23.829  ************************************
00:13:23.829  END TEST vfio_user_nvme_restart_vm
00:13:23.829  ************************************
00:13:24.089   19:11:54 vfio_user_qemu -- vfio_user/vfio_user.sh@17 -- # run_test vfio_user_virtio_blk_restart_vm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_restart_vm.sh virtio_blk
00:13:24.089   19:11:54 vfio_user_qemu -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:13:24.089   19:11:54 vfio_user_qemu -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:24.089   19:11:54 vfio_user_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:24.089  ************************************
00:13:24.089  START TEST vfio_user_virtio_blk_restart_vm
00:13:24.089  ************************************
00:13:24.089   19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_restart_vm.sh virtio_blk
00:13:24.089  * Looking for test storage...
00:13:24.089  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:13:24.089    19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:13:24.089     19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1711 -- # lcov --version
00:13:24.089     19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:13:24.089    19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:13:24.089    19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:24.089    19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:24.089    19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:24.089    19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@336 -- # IFS=.-:
00:13:24.089    19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@336 -- # read -ra ver1
00:13:24.089    19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@337 -- # IFS=.-:
00:13:24.090    19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@337 -- # read -ra ver2
00:13:24.090    19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@338 -- # local 'op=<'
00:13:24.090    19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@340 -- # ver1_l=2
00:13:24.090    19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@341 -- # ver2_l=1
00:13:24.090    19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:13:24.090    19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@344 -- # case "$op" in
00:13:24.090    19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@345 -- # : 1
00:13:24.090    19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@364 -- # (( v = 0 ))
00:13:24.090    19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:24.090     19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@365 -- # decimal 1
00:13:24.090     19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@353 -- # local d=1
00:13:24.090     19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:24.090     19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@355 -- # echo 1
00:13:24.090    19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@365 -- # ver1[v]=1
00:13:24.090     19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@366 -- # decimal 2
00:13:24.090     19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@353 -- # local d=2
00:13:24.090     19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:24.090     19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@355 -- # echo 2
00:13:24.090    19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@366 -- # ver2[v]=2
00:13:24.090    19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:13:24.090    19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:13:24.090    19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@368 -- # return 0
00:13:24.090    19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:24.090    19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:13:24.090  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:24.090  		--rc genhtml_branch_coverage=1
00:13:24.090  		--rc genhtml_function_coverage=1
00:13:24.090  		--rc genhtml_legend=1
00:13:24.090  		--rc geninfo_all_blocks=1
00:13:24.090  		--rc geninfo_unexecuted_blocks=1
00:13:24.090  		
00:13:24.090  		'
00:13:24.090    19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:13:24.090  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:24.090  		--rc genhtml_branch_coverage=1
00:13:24.090  		--rc genhtml_function_coverage=1
00:13:24.090  		--rc genhtml_legend=1
00:13:24.090  		--rc geninfo_all_blocks=1
00:13:24.090  		--rc geninfo_unexecuted_blocks=1
00:13:24.090  		
00:13:24.090  		'
00:13:24.090    19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:13:24.090  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:24.090  		--rc genhtml_branch_coverage=1
00:13:24.090  		--rc genhtml_function_coverage=1
00:13:24.090  		--rc genhtml_legend=1
00:13:24.090  		--rc geninfo_all_blocks=1
00:13:24.090  		--rc geninfo_unexecuted_blocks=1
00:13:24.090  		
00:13:24.090  		'
00:13:24.090    19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:13:24.090  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:24.090  		--rc genhtml_branch_coverage=1
00:13:24.090  		--rc genhtml_function_coverage=1
00:13:24.090  		--rc genhtml_legend=1
00:13:24.090  		--rc geninfo_all_blocks=1
00:13:24.090  		--rc geninfo_unexecuted_blocks=1
00:13:24.090  		
00:13:24.090  		'
00:13:24.090   19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/common.sh
00:13:24.090    19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/common.sh@6 -- # : 128
00:13:24.090    19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/common.sh@7 -- # : 512
00:13:24.090    19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/common.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh
00:13:24.090     19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@6 -- # : false
00:13:24.090     19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@7 -- # : /root/vhost_test
00:13:24.090     19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@8 -- # : /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:13:24.090     19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@9 -- # : qemu-img
00:13:24.090      19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/..
00:13:24.090     19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest
00:13:24.090     19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:13:24.090     19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:13:24.090     19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:13:24.090     19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:13:24.090     19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:13:24.090       19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_restart_vm.sh
00:13:24.090      19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:13:24.090     19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:13:24.090     19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:13:24.090     19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:13:24.090     19:11:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:13:24.090     19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:13:24.090     19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config
00:13:24.090      19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:13:24.090      19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@2 -- # vhost_0_main_core=0
00:13:24.090      19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:13:24.090      19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:13:24.090      19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:13:24.090      19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:13:24.090      19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:13:24.091      19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:13:24.091      19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:13:24.091      19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:13:24.091      19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:13:24.091      19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:13:24.091      19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:13:24.091      19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:13:24.091      19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:13:24.091      19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:13:24.091      19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:13:24.091      19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:13:24.091      19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:13:24.091      19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:13:24.091      19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:13:24.091      19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:13:24.091      19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:13:24.091      19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:13:24.091      19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:13:24.091      19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:13:24.091     19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh
00:13:24.091      19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:13:24.091      19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:13:24.091      19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:13:24.091      19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler
00:13:24.091      19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:13:24.091      19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh
00:13:24.091       19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:13:24.091        19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/cgroups.sh@244 -- # check_cgroup
00:13:24.091        19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:13:24.091        19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:13:24.091        19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/cgroups.sh@10 -- # echo 2
00:13:24.091       19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/cgroups.sh@244 -- # cgroup_version=2
00:13:24.091    19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/common.sh@11 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:13:24.091    19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/common.sh@14 -- # [[ ! -e /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 ]]
00:13:24.091    19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/common.sh@19 -- # QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:13:24.091   19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/common.sh
00:13:24.091   19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@12 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/autotest.config
00:13:24.091    19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@1 -- # vhost_0_reactor_mask='[0-3]'
00:13:24.091    19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@2 -- # vhost_0_main_core=0
00:13:24.091    19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@4 -- # VM_0_qemu_mask=4-5
00:13:24.091    19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:13:24.091    19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@7 -- # VM_1_qemu_mask=6-7
00:13:24.091    19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:13:24.091    19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@10 -- # VM_2_qemu_mask=8-9
00:13:24.091    19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:13:24.091   19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@14 -- # bdfs=($(get_nvme_bdfs))
00:13:24.091    19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@14 -- # get_nvme_bdfs
00:13:24.091    19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1498 -- # bdfs=()
00:13:24.091    19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1498 -- # local bdfs
00:13:24.091    19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:13:24.091     19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/gen_nvme.sh
00:13:24.091     19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:13:24.366    19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:13:24.366    19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0b:00.0
00:13:24.366    19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@15 -- # get_vhost_dir 0
00:13:24.366    19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:13:24.366    19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:13:24.366    19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:13:24.366   19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@15 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:13:24.366   19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@17 -- # virtio_type=virtio_blk
00:13:24.366   19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@18 -- # [[ virtio_blk != virtio_blk ]]
00:13:24.366   19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@31 -- # vhosttestinit
00:13:24.366   19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@37 -- # '[' '' == iso ']'
00:13:24.366   19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@41 -- # [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz ]]
00:13:24.366   19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@41 -- # [[ ! -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:13:24.366   19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@46 -- # [[ ! -f /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:13:24.366   19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@33 -- # vfu_tgt_run 0
00:13:24.366   19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@6 -- # local vhost_name=0
00:13:24.366   19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@7 -- # local vfio_user_dir vfu_pid_file rpc_py
00:13:24.366    19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@9 -- # get_vhost_dir 0
00:13:24.366    19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:13:24.366    19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:13:24.366    19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:13:24.366   19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@9 -- # vfio_user_dir=/root/vhost_test/vhost/0
00:13:24.366   19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@10 -- # vfu_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:13:24.366   19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@11 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:13:24.366   19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@13 -- # mkdir -p /root/vhost_test/vhost/0
00:13:24.366   19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@15 -- # timing_enter vfu_tgt_start
00:13:24.366   19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@726 -- # xtrace_disable
00:13:24.366   19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:13:24.366   19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@17 -- # vfupid=545856
00:13:24.366   19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@16 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -r /root/vhost_test/vhost/0/rpc.sock -m 0xf -s 512
00:13:24.366   19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@18 -- # echo 545856
00:13:24.366   19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@20 -- # echo 'Process pid: 545856'
00:13:24.366  Process pid: 545856
00:13:24.366   19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@21 -- # echo 'waiting for app to run...'
00:13:24.366  waiting for app to run...
00:13:24.366   19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@22 -- # waitforlisten 545856 /root/vhost_test/vhost/0/rpc.sock
00:13:24.366   19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@835 -- # '[' -z 545856 ']'
00:13:24.366   19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@839 -- # local rpc_addr=/root/vhost_test/vhost/0/rpc.sock
00:13:24.366   19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:24.366   19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...'
00:13:24.366  Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...
00:13:24.366   19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:24.366   19:11:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:13:24.366  [2024-12-06 19:11:55.208462] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:13:24.366  [2024-12-06 19:11:55.208637] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xf -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid545856 ]
00:13:24.366  EAL: No free 2048 kB hugepages reported on node 1
00:13:24.933  [2024-12-06 19:11:55.589263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:13:24.933  [2024-12-06 19:11:55.703461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:13:24.933  [2024-12-06 19:11:55.703525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:13:24.933  [2024-12-06 19:11:55.703564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:24.933  [2024-12-06 19:11:55.703574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:13:25.896   19:11:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:25.896   19:11:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@868 -- # return 0
00:13:25.896   19:11:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@24 -- # timing_exit vfu_tgt_start
00:13:25.896   19:11:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@732 -- # xtrace_disable
00:13:25.896   19:11:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:13:25.896   19:11:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@35 -- # vfu_vm_dir=/root/vhost_test/vms/vfu_tgt
00:13:25.896   19:11:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@36 -- # rm -rf /root/vhost_test/vms/vfu_tgt
00:13:25.896   19:11:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@37 -- # mkdir -p /root/vhost_test/vms/vfu_tgt
00:13:25.896   19:11:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@39 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_nvme_attach_controller -b Nvme0 -t pcie -a 0000:0b:00.0
00:13:29.227  Nvme0n1
00:13:29.227   19:11:59 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@42 -- # disk_no=1
00:13:29.227   19:11:59 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@43 -- # vm_num=1
00:13:29.227   19:11:59 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@44 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vfu_tgt_set_base_path /root/vhost_test/vms/vfu_tgt
00:13:29.227   19:11:59 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@46 -- # [[ virtio_blk == \v\i\r\t\i\o\_\b\l\k ]]
00:13:29.228   19:11:59 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@47 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vfu_virtio_create_blk_endpoint virtio.1 --bdev-name Nvme0n1 --num-queues=2 --qsize=512 --packed-ring
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@53 -- # vm_setup --disk-type=vfio_user_virtio --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=1
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@518 -- # xtrace_disable
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:13:29.486  INFO: Creating new VM in /root/vhost_test/vms/1
00:13:29.486  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:13:29.486  INFO: TASK MASK: 6-7
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@671 -- # local node_num=0
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@672 -- # local boot_disk_present=false
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:13:29.486  INFO: NUMA NODE: 0
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@677 -- # [[ -n '' ]]
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@686 -- # [[ -z '' ]]
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@701 -- # IFS=,
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@701 -- # read -r disk disk_type _
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@702 -- # [[ -z '' ]]
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@702 -- # disk_type=vfio_user_virtio
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@704 -- # case $disk_type in
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@766 -- # notice 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:13:29.486  INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@767 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/vfu_tgt/virtio.$disk")
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@768 -- # [[ 1 == '' ]]
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@780 -- # [[ -n '' ]]
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@785 -- # (( 0 ))
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh'
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh'
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:13:29.486   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh'
00:13:29.487  INFO: Saving to /root/vhost_test/vms/1/run.sh
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@787 -- # cat
00:13:29.487    19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/vfu_tgt/virtio.1
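The `cat`/`printf` step above dumps the fully assembled QEMU command line into the VM's `run.sh`. A minimal self-contained sketch of that assembly pattern (argument array built up, then printed into an executable script) is below; the paths and the exact `printf` format are assumptions for the demo, not the real `vhost/common.sh` implementation, and the QEMU binary path is a stand-in for the custom `vfio-user-latest` build used in the log.

```shell
# Hypothetical re-creation of the run.sh assembly seen in the trace.
qemu_bin=/usr/bin/qemu-system-x86_64   # assumption; the log uses a custom build
vm_dir=$(mktemp -d)                    # stands in for /root/vhost_test/vms/1

cmd=(taskset -a -c 6-7 "$qemu_bin" -m 1024 --enable-kvm -cpu host -smp 2)
cmd+=(-object "memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind")
cmd+=(-snapshot)                # discard guest disk writes when QEMU exits
cmd+=(-numa "node,memdev=mem")  # back guest RAM with the hugepage memdev

printf '%s ' "${cmd[@]}" > "$vm_dir/run.sh"
chmod +x "$vm_dir/run.sh"
head -c 40 "$vm_dir/run.sh"
```

The `-snapshot` flag is what makes this a "restart" test friendly setup: the qcow2 OS image is opened copy-on-write, so every boot starts from the same state.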
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/1/run.sh
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@827 -- # echo 10100
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@828 -- # echo 10101
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@829 -- # echo 10102
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/1/migration_port
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@832 -- # [[ -z '' ]]
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@834 -- # echo 10104
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@835 -- # echo 101
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@837 -- # [[ -z '' ]]
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@838 -- # [[ -z '' ]]
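The `echo 10100` / `10101` / `10102` / `10104` / `101` lines above are the per-VM port bookkeeping: each value is written into a small file under the VM directory, which helpers like `vm_ssh_socket` and `vm_fio_socket` later read back with `cat`. A hedged sketch, with file names taken from later in the log and the base-port arithmetic an assumption:

```shell
# Sketch of per-VM port files (names from the log; the 10000 + vm*100
# formula is an assumption inferred from the observed values).
vm_dir=$(mktemp -d)    # stands in for /root/vhost_test/vms/1
vm_num=1
base=$((10000 + vm_num * 100))

echo $((base + 0)) > "$vm_dir/ssh_socket"    # 10100, forwarded to guest :22
echo $((base + 1)) > "$vm_dir/fio_socket"    # 10101, forwarded to guest :8765
echo $((base + 2)) > "$vm_dir/monitor_port"  # 10102, QEMU telnet monitor
echo 101           > "$vm_dir/vnc_socket"    # VNC display :101

cat "$vm_dir/ssh_socket"
```

Keeping the ports in files rather than variables lets any later shell (including a fresh test stage) address the same VM without re-deriving its configuration.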
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@54 -- # vm_run 1
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@843 -- # local run_all=false
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@844 -- # local vms_to_run=
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@846 -- # getopts a-: optchar
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@856 -- # false
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@859 -- # shift 0
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@860 -- # for vm in "$@"
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@861 -- # vm_num_is_valid 1
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]]
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@866 -- # vms_to_run+=' 1'
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@871 -- # vm_is_running 1
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@373 -- # return 1
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/1/run.sh'
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh'
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh'
00:13:29.487  INFO: running /root/vhost_test/vms/1/run.sh
00:13:29.487   19:12:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@877 -- # /root/vhost_test/vms/1/run.sh
00:13:29.487  Running VM in /root/vhost_test/vms/1
00:13:29.744  [2024-12-06 19:12:00.590494] tgt_endpoint.c: 167:tgt_accept_poller: *NOTICE*: /root/vhost_test/vms/vfu_tgt/virtio.1: attached successfully
00:13:29.744  Waiting for QEMU pid file
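"Waiting for QEMU pid file" refers to polling for the `-pidfile` that the daemonized QEMU writes once it is up. The loop below is a self-contained illustration of that wait, with a background writer simulating QEMU; the polling interval and timeout are assumptions, not the actual `vhost/common.sh` logic.

```shell
# Simulated pid-file wait (loop shape is an assumption for the demo).
pid_file=$(mktemp -u)                      # stands in for vms/1/qemu.pid
( sleep 0.2; echo 4242 > "$pid_file" ) &   # simulate daemonized QEMU appearing

# Poll until the file exists and is non-empty, up to ~5 seconds.
for _ in $(seq 1 50); do
    [ -s "$pid_file" ] && break
    sleep 0.1
done
cat "$pid_file"
```

Checking `-s` (non-empty) rather than mere existence avoids a race where the redirect has created the file but the pid has not been written yet.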
00:13:31.112  === qemu.log ===
00:13:31.112  === qemu.log ===
00:13:31.112   19:12:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@55 -- # vm_wait_for_boot 60 1
00:13:31.112   19:12:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@913 -- # assert_number 60
00:13:31.112   19:12:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@281 -- # [[ 60 =~ [0-9]+ ]]
00:13:31.112   19:12:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@281 -- # return 0
00:13:31.112   19:12:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@915 -- # xtrace_disable
00:13:31.112   19:12:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:13:31.112  INFO: Waiting for VMs to boot
00:13:31.112  INFO: waiting for VM1 (/root/vhost_test/vms/1)
00:13:53.019  
00:13:53.019  INFO: VM1 ready
00:13:53.020  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:13:53.020  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:13:53.586  INFO: all VMs ready
00:13:53.586   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@973 -- # return 0
00:13:53.586   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@58 -- # fio_bin=--fio-bin=/usr/src/fio-static/fio
00:13:53.586   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@59 -- # fio_disks=
00:13:53.586   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@60 -- # qemu_mask_param=VM_1_qemu_mask
00:13:53.586   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@62 -- # host_name=VM-1-6-7
00:13:53.586   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@63 -- # vm_exec 1 'hostname VM-1-6-7'
00:13:53.586   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:13:53.586   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:53.586   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:13:53.586   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:13:53.586   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:13:53.586    19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:13:53.586    19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:13:53.586    19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:53.586    19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:13:53.586    19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:13:53.586    19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:13:53.586   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'hostname VM-1-6-7'
00:13:53.586  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:13:53.586   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@64 -- # vm_start_fio_server --fio-bin=/usr/src/fio-static/fio 1
00:13:53.586   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@977 -- # local OPTIND optchar
00:13:53.586   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@978 -- # local readonly=
00:13:53.586   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@979 -- # local fio_bin=
00:13:53.586   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@980 -- # getopts :-: optchar
00:13:53.586   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@981 -- # case "$optchar" in
00:13:53.586   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@983 -- # case "$OPTARG" in
00:13:53.586   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@984 -- # local fio_bin=/usr/src/fio-static/fio
00:13:53.586   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@980 -- # getopts :-: optchar
00:13:53.586   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@993 -- # shift 1
00:13:53.586   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@994 -- # for vm_num in "$@"
00:13:53.586   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@995 -- # notice 'Starting fio server on VM1'
00:13:53.586   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Starting fio server on VM1'
00:13:53.586   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:13:53.586   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:13:53.586   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:13:53.586   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:13:53.586   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:13:53.586   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Starting fio server on VM1'
00:13:53.586  INFO: Starting fio server on VM1
00:13:53.586   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@996 -- # [[ /usr/src/fio-static/fio != '' ]]
00:13:53.586   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@997 -- # vm_exec 1 'cat > /root/fio; chmod +x /root/fio'
00:13:53.586   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:13:53.586   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:53.586   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:13:53.586   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:13:53.587   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:13:53.587    19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:13:53.587    19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:13:53.587    19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:53.587    19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:13:53.587    19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:13:53.587    19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:13:53.587   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/fio; chmod +x /root/fio'
00:13:53.587  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:13:53.845   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@998 -- # vm_exec 1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:13:53.845   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:13:53.845   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:53.845   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:13:53.845   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:13:53.845   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:13:53.845    19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:13:53.845    19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:13:53.845    19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:53.845    19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:13:53.845    19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:13:53.845    19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:13:53.845   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:13:53.845  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:13:54.104   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@66 -- # disks_before_restart=
00:13:54.104   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@67 -- # get_disks virtio_blk 1
00:13:54.104   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@24 -- # [[ virtio_blk == \v\i\r\t\i\o\_\s\c\s\i ]]
00:13:54.104   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@26 -- # [[ virtio_blk == \v\i\r\t\i\o\_\b\l\k ]]
00:13:54.104   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@27 -- # vm_check_blk_location 1
00:13:54.104   19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1035 -- # local 'script=shopt -s nullglob; cd /sys/block; echo vd*'
00:13:54.104    19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1036 -- # echo 'shopt -s nullglob; cd /sys/block; echo vd*'
00:13:54.104    19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1036 -- # vm_exec 1 bash -s
00:13:54.104    19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:13:54.104    19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:54.104    19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:13:54.104    19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:13:54.104    19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:13:54.104     19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:13:54.104     19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:13:54.104     19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:54.104     19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:13:54.104     19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:13:54.104     19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:13:54.104    19:12:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 bash -s
00:13:54.104  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1036 -- # SCSI_DISK=vda
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1038 -- # [[ -z vda ]]
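The `SCSI_DISK=vda` result above comes from the one-liner `shopt -s nullglob; cd /sys/block; echo vd*` piped into the guest over ssh: it lists virtio-blk block devices by glob. It can be demonstrated locally against a temp directory standing in for `/sys/block` (an assumption for the demo):

```shell
# Demonstrate the guest-side disk probe against a fake sysfs directory.
fake_sys_block=$(mktemp -d)
mkdir "$fake_sys_block/vda" "$fake_sys_block/sda"

# nullglob makes 'vd*' expand to nothing (instead of the literal string
# 'vd*') when no virtio-blk device is present.
disks=$(shopt -s nullglob; cd "$fake_sys_block"; echo vd*)
echo "$disks"
```

Without `nullglob`, an empty guest would make the probe return the literal `vd*`, which is why the later `[[ -z ... ]]` check would otherwise be unreliable.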
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@68 -- # disks_before_restart=vda
00:13:54.373    19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@70 -- # printf :/dev/%s vda
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@70 -- # fio_disks=' --vm=1:/dev/vda'
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@71 -- # job_file=default_integrity.job
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@74 -- # run_fio --fio-bin=/usr/src/fio-static/fio --job-file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job --out=/root/vhost_test/fio_results --vm=1:/dev/vda
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1053 -- # local arg
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1054 -- # local job_file=
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1055 -- # local fio_bin=
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1056 -- # vms=()
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1056 -- # local vms
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1057 -- # local out=
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1058 -- # local vm
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1059 -- # local run_server_mode=true
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1060 -- # local run_plugin_mode=false
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1061 -- # local fio_start_cmd
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1062 -- # local fio_output_format=normal
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1063 -- # local fio_gtod_reduce=false
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1064 -- # local wait_for_fio=true
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1066 -- # for arg in "$@"
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1067 -- # case "$arg" in
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1069 -- # local fio_bin=/usr/src/fio-static/fio
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1066 -- # for arg in "$@"
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1067 -- # case "$arg" in
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1068 -- # local job_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1066 -- # for arg in "$@"
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1067 -- # case "$arg" in
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1072 -- # local out=/root/vhost_test/fio_results
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1073 -- # mkdir -p /root/vhost_test/fio_results
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1066 -- # for arg in "$@"
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1067 -- # case "$arg" in
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1070 -- # vms+=("${arg#*=}")
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1092 -- # [[ -n /usr/src/fio-static/fio ]]
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1092 -- # [[ ! -r /usr/src/fio-static/fio ]]
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1097 -- # [[ -z /usr/src/fio-static/fio ]]
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1101 -- # [[ ! -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job ]]
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1106 -- # fio_start_cmd='/usr/src/fio-static/fio --eta=never '
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1108 -- # local job_fname
00:13:54.373    19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1109 -- # basename /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1109 -- # job_fname=default_integrity.job
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1110 -- # log_fname=default_integrity.log
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1111 -- # fio_start_cmd+=' --output=/root/vhost_test/fio_results/default_integrity.log --output-format=normal '
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1114 -- # for vm in "${vms[@]}"
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1115 -- # local vm_num=1
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1116 -- # local vmdisks=/dev/vda
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1118 -- # sed 's@filename=@filename=/dev/vda@;s@description=\(.*\)@description=\1 (VM=1)@' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1119 -- # vm_exec 1 'cat > /root/default_integrity.job'
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:13:54.373    19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:13:54.373    19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:13:54.373    19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:54.373    19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:13:54.373    19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:13:54.373    19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:13:54.373   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/default_integrity.job'
00:13:54.374  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:13:54.374   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1121 -- # false
00:13:54.374   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1125 -- # vm_exec 1 cat /root/default_integrity.job
00:13:54.374   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:13:54.374   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:54.374   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:13:54.374   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:13:54.374   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:13:54.374    19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:13:54.374    19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:13:54.374    19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:54.374    19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:13:54.374    19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:13:54.374    19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:13:54.374   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 cat /root/default_integrity.job
00:13:54.374  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:13:54.640  [global]
00:13:54.640  blocksize_range=4k-512k
00:13:54.640  iodepth=512
00:13:54.640  iodepth_batch=128
00:13:54.640  iodepth_low=256
00:13:54.640  ioengine=libaio
00:13:54.640  size=1G
00:13:54.640  io_size=4G
00:13:54.640  filename=/dev/vda
00:13:54.640  group_reporting
00:13:54.640  thread
00:13:54.640  numjobs=1
00:13:54.640  direct=1
00:13:54.640  rw=randwrite
00:13:54.640  do_verify=1
00:13:54.640  verify=md5
00:13:54.640  verify_backlog=1024
00:13:54.640  fsync_on_close=1
00:13:54.641  verify_state_save=0
00:13:54.641  [nvme-host]
00:13:54.641   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1127 -- # true
00:13:54.641    19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1128 -- # vm_fio_socket 1
00:13:54.641    19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@326 -- # vm_num_is_valid 1
00:13:54.641    19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:54.641    19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:13:54.641    19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@327 -- # local vm_dir=/root/vhost_test/vms/1
00:13:54.641    19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@329 -- # cat /root/vhost_test/vms/1/fio_socket
00:13:54.641   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1128 -- # fio_start_cmd+='--client=127.0.0.1,10101 --remote-config /root/default_integrity.job '
00:13:54.641   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1131 -- # true
00:13:54.641   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1147 -- # true
00:13:54.641   19:12:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1161 -- # /usr/src/fio-static/fio --eta=never --output=/root/vhost_test/fio_results/default_integrity.log --output-format=normal --client=127.0.0.1,10101 --remote-config /root/default_integrity.job
00:14:06.834   19:12:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1162 -- # sleep 1
00:14:06.834   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1164 -- # [[ normal == \j\s\o\n ]]
00:14:06.834   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1172 -- # [[ ! -n '' ]]
00:14:06.834   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1173 -- # cat /root/vhost_test/fio_results/default_integrity.log
00:14:06.834  hostname=VM-1-6-7, be=0, 64-bit, os=Linux, arch=x86-64, fio=fio-3.35, flags=1
00:14:06.834  <VM-1-6-7> nvme-host: (g=0): rw=randwrite, bs=(R) 4096B-512KiB, (W) 4096B-512KiB, (T) 4096B-512KiB, ioengine=libaio, iodepth=512
00:14:06.834  <VM-1-6-7> Starting 1 thread
00:14:06.834  <VM-1-6-7> 
00:14:06.834  nvme-host: (groupid=0, jobs=1): err= 0: pid=939: Fri Dec  6 19:12:36 2024
00:14:06.834    read: IOPS=1313, BW=220MiB/s (231MB/s)(2048MiB/9295msec)
00:14:06.834      slat (usec): min=46, max=18577, avg=2367.95, stdev=3624.49
00:14:06.834      clat (msec): min=6, max=346, avg=132.90, stdev=73.26
00:14:06.834       lat (msec): min=7, max=347, avg=135.27, stdev=72.81
00:14:06.834      clat percentiles (msec):
00:14:06.834       |  1.00th=[   12],  5.00th=[   19], 10.00th=[   42], 20.00th=[   73],
00:14:06.834       | 30.00th=[   87], 40.00th=[  106], 50.00th=[  125], 60.00th=[  142],
00:14:06.834       | 70.00th=[  167], 80.00th=[  197], 90.00th=[  236], 95.00th=[  271],
00:14:06.834       | 99.00th=[  317], 99.50th=[  330], 99.90th=[  342], 99.95th=[  342],
00:14:06.834       | 99.99th=[  347]
00:14:06.834    write: IOPS=1396, BW=234MiB/s (246MB/s)(2048MiB/8741msec); 0 zone resets
00:14:06.834      slat (usec): min=250, max=73734, avg=22020.00, stdev=14988.12
00:14:06.834      clat (msec): min=6, max=302, avg=122.40, stdev=66.43
00:14:06.834       lat (msec): min=7, max=338, avg=144.42, stdev=69.33
00:14:06.834      clat percentiles (msec):
00:14:06.834       |  1.00th=[    8],  5.00th=[   20], 10.00th=[   31], 20.00th=[   67],
00:14:06.834       | 30.00th=[   83], 40.00th=[   99], 50.00th=[  117], 60.00th=[  136],
00:14:06.834       | 70.00th=[  155], 80.00th=[  178], 90.00th=[  213], 95.00th=[  241],
00:14:06.834       | 99.00th=[  284], 99.50th=[  305], 99.90th=[  305], 99.95th=[  305],
00:14:06.834       | 99.99th=[  305]
00:14:06.834     bw (  KiB/s): min= 3384, max=364920, per=94.51%, avg=226744.00, stdev=107683.98, samples=18
00:14:06.834     iops        : min=   22, max= 2048, avg=1304.00, stdev=676.47, samples=18
00:14:06.834    lat (msec)   : 10=0.79%, 20=4.34%, 50=8.69%, 100=25.29%, 250=55.43%
00:14:06.834    lat (msec)   : 500=5.47%
00:14:06.834    cpu          : usr=93.83%, sys=1.95%, ctx=433, majf=0, minf=34
00:14:06.834    IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.5%, >=64=99.1%
00:14:06.834       submit    : 0=0.0%, 4=0.0%, 8=1.2%, 16=0.0%, 32=0.0%, 64=19.2%, >=64=79.6%
00:14:06.834       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:14:06.834       issued rwts: total=12208,12208,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.834       latency   : target=0, window=0, percentile=100.00%, depth=512
00:14:06.834  
00:14:06.834  Run status group 0 (all jobs):
00:14:06.834     READ: bw=220MiB/s (231MB/s), 220MiB/s-220MiB/s (231MB/s-231MB/s), io=2048MiB (2147MB), run=9295-9295msec
00:14:06.834    WRITE: bw=234MiB/s (246MB/s), 234MiB/s-234MiB/s (246MB/s-246MB/s), io=2048MiB (2147MB), run=8741-8741msec
00:14:06.834  
00:14:06.834  Disk stats (read/write):
00:14:06.834    vda: ios=12311/12141, merge=71/72, ticks=141302/100421, in_queue=241724, util=28.32%
00:14:06.834   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@77 -- # notice 'Shutting down virtual machine...'
00:14:06.834   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine...'
00:14:06.834   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:14:06.834   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:14:06.834   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:14:06.834   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:06.834   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:14:06.834   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine...'
00:14:06.834  INFO: Shutting down virtual machine...
00:14:06.834   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@78 -- # vm_shutdown_all
00:14:06.834   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@487 -- # local timeo=90 vms vm
00:14:06.834   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@489 -- # vms=($(vm_list_all))
00:14:06.834    19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@489 -- # vm_list_all
00:14:06.834    19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@466 -- # vms=()
00:14:06.835    19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@466 -- # local vms
00:14:06.835    19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:14:06.835    19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:14:06.835    19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@492 -- # vm_shutdown 1
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@417 -- # vm_num_is_valid 1
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/1
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/1 ]]
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@424 -- # vm_is_running 1
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:14:06.835    19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # vm_pid=546619
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 546619
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@380 -- # return 0
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1'
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1'
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1'
00:14:06.835  INFO: Shutting down virtual machine /root/vhost_test/vms/1
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@432 -- # set +e
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@433 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\'''
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:14:06.835    19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:14:06.835    19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:14:06.835    19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:06.835    19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:14:06.835    19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:14:06.835    19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:14:06.835  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@434 -- # notice 'VM1 is shutting down - wait a while to complete'
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete'
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete'
00:14:06.835  INFO: VM1 is shutting down - wait a while to complete
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@435 -- # set -e
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@495 -- # notice 'Waiting for VMs to shutdown...'
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...'
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...'
00:14:06.835  INFO: Waiting for VMs to shutdown...
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:14:06.835    19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # vm_pid=546619
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 546619
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@380 -- # return 0
00:14:06.835   19:12:37 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:14:07.400   19:12:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:14:07.400   19:12:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:14:07.400   19:12:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:14:07.400   19:12:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:14:07.400   19:12:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:07.400   19:12:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:14:07.400   19:12:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:14:07.400   19:12:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:14:07.400   19:12:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:14:07.400    19:12:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:14:07.400   19:12:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # vm_pid=546619
00:14:07.400   19:12:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 546619
00:14:07.400   19:12:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@380 -- # return 0
00:14:07.400   19:12:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:14:08.774   19:12:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:14:08.774   19:12:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:14:08.774   19:12:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:14:08.774   19:12:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:14:08.774   19:12:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:08.774   19:12:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:14:08.774   19:12:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:14:08.774   19:12:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:14:08.774   19:12:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@373 -- # return 1
00:14:08.774   19:12:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:14:08.774   19:12:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:14:09.709   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 0 > 0 ))
00:14:09.709   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@503 -- # (( 0 == 0 ))
00:14:09.709   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@504 -- # notice 'All VMs successfully shut down'
00:14:09.709   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down'
00:14:09.709   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:14:09.709   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:14:09.709   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:14:09.709   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:09.709   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:14:09.709   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down'
00:14:09.709  INFO: All VMs successfully shut down
00:14:09.709   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@505 -- # return 0
00:14:09.709   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@81 -- # vm_setup --disk-type=vfio_user_virtio --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=1
00:14:09.709   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@518 -- # xtrace_disable
00:14:09.709   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:14:09.709  WARN: removing existing VM in '/root/vhost_test/vms/1'
00:14:09.709  INFO: Creating new VM in /root/vhost_test/vms/1
00:14:09.709  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:14:09.709  INFO: TASK MASK: 6-7
00:14:09.709   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@671 -- # local node_num=0
00:14:09.709   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@672 -- # local boot_disk_present=false
00:14:09.709   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:14:09.709   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:14:09.709   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:14:09.709   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:14:09.709   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:14:09.709   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:09.709   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:14:09.709   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:14:09.709  INFO: NUMA NODE: 0
00:14:09.709   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:14:09.709   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:14:09.709   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:14:09.709   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:14:09.709   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@677 -- # [[ -n '' ]]
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@686 -- # [[ -z '' ]]
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@701 -- # IFS=,
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@701 -- # read -r disk disk_type _
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@702 -- # [[ -z '' ]]
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@702 -- # disk_type=vfio_user_virtio
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@704 -- # case $disk_type in
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@766 -- # notice 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:14:09.710  INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@767 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/vfu_tgt/virtio.$disk")
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@768 -- # [[ 1 == '' ]]
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@780 -- # [[ -n '' ]]
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@785 -- # (( 0 ))
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh'
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh'
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh'
00:14:09.710  INFO: Saving to /root/vhost_test/vms/1/run.sh
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@787 -- # cat
00:14:09.710    19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/vfu_tgt/virtio.1
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/1/run.sh
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@827 -- # echo 10100
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@828 -- # echo 10101
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@829 -- # echo 10102
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/1/migration_port
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@832 -- # [[ -z '' ]]
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@834 -- # echo 10104
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@835 -- # echo 101
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@837 -- # [[ -z '' ]]
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@838 -- # [[ -z '' ]]
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@82 -- # vm_run 1
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@843 -- # local run_all=false
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@844 -- # local vms_to_run=
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@846 -- # getopts a-: optchar
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@856 -- # false
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@859 -- # shift 0
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@860 -- # for vm in "$@"
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@861 -- # vm_num_is_valid 1
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]]
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@866 -- # vms_to_run+=' 1'
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@871 -- # vm_is_running 1
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@373 -- # return 1
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/1/run.sh'
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh'
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh'
00:14:09.710  INFO: running /root/vhost_test/vms/1/run.sh
00:14:09.710   19:12:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@877 -- # /root/vhost_test/vms/1/run.sh
00:14:09.710  Running VM in /root/vhost_test/vms/1
00:14:09.976  [2024-12-06 19:12:40.666070] tgt_endpoint.c: 167:tgt_accept_poller: *NOTICE*: /root/vhost_test/vms/vfu_tgt/virtio.1: attached successfully
00:14:09.976  Waiting for QEMU pid file
00:14:10.908  === qemu.log ===
00:14:10.908  === qemu.log ===
00:14:10.908   19:12:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@83 -- # vm_wait_for_boot 60 1
00:14:10.908   19:12:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@913 -- # assert_number 60
00:14:10.908   19:12:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@281 -- # [[ 60 =~ [0-9]+ ]]
00:14:10.908   19:12:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@281 -- # return 0
00:14:10.908   19:12:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@915 -- # xtrace_disable
00:14:10.908   19:12:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:14:10.908  INFO: Waiting for VMs to boot
00:14:10.908  INFO: waiting for VM1 (/root/vhost_test/vms/1)
00:14:32.852  
00:14:32.852  INFO: VM1 ready
00:14:32.852  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:14:32.852  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:14:33.110  INFO: all VMs ready
00:14:33.110   19:13:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@973 -- # return 0
00:14:33.110   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@86 -- # disks_after_restart=
00:14:33.110   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@87 -- # get_disks virtio_blk 1
00:14:33.110   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@24 -- # [[ virtio_blk == \v\i\r\t\i\o\_\s\c\s\i ]]
00:14:33.110   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@26 -- # [[ virtio_blk == \v\i\r\t\i\o\_\b\l\k ]]
00:14:33.110   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@27 -- # vm_check_blk_location 1
00:14:33.111   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1035 -- # local 'script=shopt -s nullglob; cd /sys/block; echo vd*'
00:14:33.111    19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1036 -- # echo 'shopt -s nullglob; cd /sys/block; echo vd*'
00:14:33.111    19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1036 -- # vm_exec 1 bash -s
00:14:33.111    19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:14:33.111    19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:33.111    19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:14:33.111    19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:14:33.111    19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:14:33.111     19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:14:33.111     19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:14:33.111     19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:33.111     19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:14:33.111     19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:14:33.111     19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:14:33.111    19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 bash -s
00:14:33.111  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:14:33.369   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1036 -- # SCSI_DISK=vda
00:14:33.369   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1038 -- # [[ -z vda ]]
00:14:33.369   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@88 -- # disks_after_restart=vda
00:14:33.369   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@90 -- # [[ vda != \v\d\a ]]
00:14:33.369   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@96 -- # notice 'Shutting down virtual machine...'
00:14:33.369   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine...'
00:14:33.369   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:14:33.369   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:14:33.369   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:14:33.369   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:33.369   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:14:33.369   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine...'
00:14:33.369  INFO: Shutting down virtual machine...
00:14:33.369   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@97 -- # vm_shutdown_all
00:14:33.369   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@487 -- # local timeo=90 vms vm
00:14:33.369   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@489 -- # vms=($(vm_list_all))
00:14:33.369    19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@489 -- # vm_list_all
00:14:33.369    19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@466 -- # vms=()
00:14:33.369    19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@466 -- # local vms
00:14:33.369    19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:14:33.369    19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:14:33.369    19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:14:33.369   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:14:33.369   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@492 -- # vm_shutdown 1
00:14:33.369   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@417 -- # vm_num_is_valid 1
00:14:33.369   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:33.369   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:14:33.369   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/1
00:14:33.369   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/1 ]]
00:14:33.369   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@424 -- # vm_is_running 1
00:14:33.369   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:14:33.370   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:33.370   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:14:33.370   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:14:33.370   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:14:33.370   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:14:33.370    19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:14:33.370   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # vm_pid=551990
00:14:33.370   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 551990
00:14:33.370   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@380 -- # return 0
00:14:33.370   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1'
00:14:33.370   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1'
00:14:33.370   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:14:33.370   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:14:33.370   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:14:33.370   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:33.370   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:14:33.370   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1'
00:14:33.370  INFO: Shutting down virtual machine /root/vhost_test/vms/1
00:14:33.370   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@432 -- # set +e
00:14:33.370   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@433 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\'''
00:14:33.370   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:14:33.370   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:33.370   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:14:33.370   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:14:33.370   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:14:33.370    19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:14:33.370    19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:14:33.370    19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:33.370    19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:14:33.370    19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:14:33.370    19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:14:33.370   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:14:33.370  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:14:33.629  Connection to 127.0.0.1 closed by remote host.
00:14:33.629   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@433 -- # true
00:14:33.629   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@434 -- # notice 'VM1 is shutting down - wait a while to complete'
00:14:33.629   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete'
00:14:33.629   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:14:33.629   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:14:33.629   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:14:33.629   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:33.629   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:14:33.629   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete'
00:14:33.629  INFO: VM1 is shutting down - wait a while to complete
00:14:33.629   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@435 -- # set -e
00:14:33.629   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@495 -- # notice 'Waiting for VMs to shutdown...'
00:14:33.629   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...'
00:14:33.629   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:14:33.629   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:14:33.629   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:14:33.629   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:33.629   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:14:33.629   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...'
00:14:33.629  INFO: Waiting for VMs to shutdown...
00:14:33.629   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:14:33.629   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:14:33.629   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:14:33.629   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:14:33.629   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:33.629   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:14:33.629   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:14:33.629   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:14:33.629   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:14:33.629    19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:14:33.629   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # vm_pid=551990
00:14:33.629   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 551990
00:14:33.629   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@380 -- # return 0
00:14:33.629   19:13:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:14:34.561   19:13:05 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:14:34.561   19:13:05 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:14:34.561   19:13:05 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:14:34.561   19:13:05 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:14:34.561   19:13:05 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:34.561   19:13:05 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:14:34.561   19:13:05 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:14:34.561   19:13:05 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:14:34.561   19:13:05 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:14:34.561    19:13:05 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:14:34.561   19:13:05 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # vm_pid=551990
00:14:34.561   19:13:05 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 551990
00:14:34.561   19:13:05 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@380 -- # return 0
00:14:34.561   19:13:05 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:14:35.490   19:13:06 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:14:35.490   19:13:06 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:14:35.490   19:13:06 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:14:35.490   19:13:06 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:14:35.490   19:13:06 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:35.490   19:13:06 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:14:35.490   19:13:06 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:14:35.490   19:13:06 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:14:35.490   19:13:06 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@373 -- # return 1
00:14:35.490   19:13:06 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:14:35.490   19:13:06 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:14:36.862   19:13:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 0 > 0 ))
00:14:36.862   19:13:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@503 -- # (( 0 == 0 ))
00:14:36.862   19:13:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@504 -- # notice 'All VMs successfully shut down'
00:14:36.862   19:13:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down'
00:14:36.862   19:13:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:14:36.862   19:13:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:14:36.862   19:13:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:14:36.862   19:13:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:36.862   19:13:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:14:36.862   19:13:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down'
00:14:36.862  INFO: All VMs successfully shut down
00:14:36.862   19:13:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@505 -- # return 0
00:14:36.862   19:13:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@99 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_nvme_detach_controller Nvme0
00:14:36.862  [2024-12-06 19:13:07.651464] vfu_virtio_blk.c: 384:bdev_event_cb: *NOTICE*: bdev name (Nvme0n1) received event(SPDK_BDEV_EVENT_REMOVE)
00:14:38.242   19:13:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@101 -- # vhost_kill 0
00:14:38.242   19:13:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@202 -- # local rc=0
00:14:38.242   19:13:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@203 -- # local vhost_name=0
00:14:38.242   19:13:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@205 -- # [[ -z 0 ]]
00:14:38.242   19:13:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@210 -- # local vhost_dir
00:14:38.242    19:13:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@211 -- # get_vhost_dir 0
00:14:38.242    19:13:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:14:38.242    19:13:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:14:38.242    19:13:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:14:38.242   19:13:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@211 -- # vhost_dir=/root/vhost_test/vhost/0
00:14:38.242   19:13:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@212 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:14:38.242   19:13:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@214 -- # [[ ! -r /root/vhost_test/vhost/0/vhost.pid ]]
00:14:38.242   19:13:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@219 -- # timing_enter vhost_kill
00:14:38.242   19:13:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@726 -- # xtrace_disable
00:14:38.242   19:13:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:14:38.242   19:13:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@220 -- # local vhost_pid
00:14:38.242    19:13:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@221 -- # cat /root/vhost_test/vhost/0/vhost.pid
00:14:38.242   19:13:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@221 -- # vhost_pid=545856
00:14:38.242   19:13:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@222 -- # notice 'killing vhost (PID 545856) app'
00:14:38.242   19:13:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'killing vhost (PID 545856) app'
00:14:38.242   19:13:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:14:38.242   19:13:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:14:38.242   19:13:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:14:38.242   19:13:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:38.242   19:13:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:14:38.242   19:13:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: killing vhost (PID 545856) app'
00:14:38.242  INFO: killing vhost (PID 545856) app
00:14:38.242   19:13:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@224 -- # kill -INT 545856
00:14:38.242   19:13:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@225 -- # notice 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:14:38.242   19:13:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:14:38.242   19:13:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:14:38.242   19:13:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:14:38.242   19:13:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:14:38.242   19:13:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:38.242   19:13:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:14:38.242   19:13:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: sent SIGINT to vhost app - waiting 60 seconds to exit'
00:14:38.242  INFO: sent SIGINT to vhost app - waiting 60 seconds to exit
00:14:38.242   19:13:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@226 -- # (( i = 0 ))
00:14:38.242   19:13:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@226 -- # (( i < 60 ))
00:14:38.242   19:13:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@227 -- # kill -0 545856
00:14:38.242   19:13:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@228 -- # echo .
00:14:38.242  .
00:14:38.242   19:13:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@229 -- # sleep 1
00:14:39.177   19:13:09 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@226 -- # (( i++ ))
00:14:39.177   19:13:09 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@226 -- # (( i < 60 ))
00:14:39.177   19:13:09 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@227 -- # kill -0 545856
00:14:39.177   19:13:09 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@228 -- # echo .
00:14:39.177  .
00:14:39.177   19:13:09 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@229 -- # sleep 1
00:14:40.112   19:13:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@226 -- # (( i++ ))
00:14:40.112   19:13:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@226 -- # (( i < 60 ))
00:14:40.112   19:13:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@227 -- # kill -0 545856
00:14:40.112  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 227: kill: (545856) - No such process
00:14:40.112   19:13:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@231 -- # break
00:14:40.112   19:13:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@234 -- # kill -0 545856
00:14:40.112  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 234: kill: (545856) - No such process
00:14:40.112   19:13:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@239 -- # kill -0 545856
00:14:40.112  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 239: kill: (545856) - No such process
00:14:40.112   19:13:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@245 -- # is_pid_child 545856
00:14:40.112   19:13:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1686 -- # local pid=545856 _pid
00:14:40.112   19:13:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1688 -- # read -r _pid
00:14:40.112    19:13:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1685 -- # jobs -pr
00:14:40.112   19:13:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1689 -- # (( pid == _pid ))
00:14:40.112   19:13:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1688 -- # read -r _pid
00:14:40.112   19:13:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1692 -- # return 1
00:14:40.112   19:13:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@257 -- # timing_exit vhost_kill
00:14:40.112   19:13:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@732 -- # xtrace_disable
00:14:40.112   19:13:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:14:40.112   19:13:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@259 -- # rm -rf /root/vhost_test/vhost/0
00:14:40.112   19:13:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@261 -- # return 0
00:14:40.112   19:13:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@103 -- # vhosttestfini
00:14:40.112   19:13:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@54 -- # '[' '' == iso ']'
00:14:40.112  
00:14:40.112  real	1m16.209s
00:14:40.112  user	4m57.601s
00:14:40.112  sys	0m2.297s
00:14:40.112   19:13:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:40.113   19:13:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:14:40.113  ************************************
00:14:40.113  END TEST vfio_user_virtio_blk_restart_vm
00:14:40.113  ************************************
00:14:40.113   19:13:11 vfio_user_qemu -- vfio_user/vfio_user.sh@18 -- # run_test vfio_user_virtio_scsi_restart_vm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_restart_vm.sh virtio_scsi
00:14:40.113   19:13:11 vfio_user_qemu -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:14:40.113   19:13:11 vfio_user_qemu -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:40.113   19:13:11 vfio_user_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:40.372  ************************************
00:14:40.372  START TEST vfio_user_virtio_scsi_restart_vm
00:14:40.372  ************************************
00:14:40.372   19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_restart_vm.sh virtio_scsi
00:14:40.372  * Looking for test storage...
00:14:40.372  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:14:40.372    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:14:40.372     19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1711 -- # lcov --version
00:14:40.372     19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:14:40.372    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:14:40.372    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:40.372    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:40.372    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:40.372    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@336 -- # IFS=.-:
00:14:40.372    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@336 -- # read -ra ver1
00:14:40.372    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@337 -- # IFS=.-:
00:14:40.372    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@337 -- # read -ra ver2
00:14:40.372    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@338 -- # local 'op=<'
00:14:40.372    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@340 -- # ver1_l=2
00:14:40.372    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@341 -- # ver2_l=1
00:14:40.372    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:40.372    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@344 -- # case "$op" in
00:14:40.372    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@345 -- # : 1
00:14:40.372    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:40.372    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:40.372     19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@365 -- # decimal 1
00:14:40.372     19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@353 -- # local d=1
00:14:40.372     19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:40.372     19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@355 -- # echo 1
00:14:40.372    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@365 -- # ver1[v]=1
00:14:40.372     19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@366 -- # decimal 2
00:14:40.372     19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@353 -- # local d=2
00:14:40.372     19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:40.372     19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@355 -- # echo 2
00:14:40.372    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@366 -- # ver2[v]=2
00:14:40.372    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:40.372    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:40.372    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@368 -- # return 0
00:14:40.372    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:40.372    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:14:40.372  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:40.372  		--rc genhtml_branch_coverage=1
00:14:40.372  		--rc genhtml_function_coverage=1
00:14:40.372  		--rc genhtml_legend=1
00:14:40.372  		--rc geninfo_all_blocks=1
00:14:40.372  		--rc geninfo_unexecuted_blocks=1
00:14:40.372  		
00:14:40.372  		'
00:14:40.372    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:14:40.372  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:40.372  		--rc genhtml_branch_coverage=1
00:14:40.372  		--rc genhtml_function_coverage=1
00:14:40.372  		--rc genhtml_legend=1
00:14:40.372  		--rc geninfo_all_blocks=1
00:14:40.372  		--rc geninfo_unexecuted_blocks=1
00:14:40.372  		
00:14:40.372  		'
00:14:40.372    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:14:40.372  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:40.372  		--rc genhtml_branch_coverage=1
00:14:40.372  		--rc genhtml_function_coverage=1
00:14:40.372  		--rc genhtml_legend=1
00:14:40.372  		--rc geninfo_all_blocks=1
00:14:40.372  		--rc geninfo_unexecuted_blocks=1
00:14:40.372  		
00:14:40.372  		'
00:14:40.372    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:14:40.372  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:40.372  		--rc genhtml_branch_coverage=1
00:14:40.372  		--rc genhtml_function_coverage=1
00:14:40.372  		--rc genhtml_legend=1
00:14:40.372  		--rc geninfo_all_blocks=1
00:14:40.372  		--rc geninfo_unexecuted_blocks=1
00:14:40.372  		
00:14:40.372  		'
00:14:40.372   19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/common.sh
00:14:40.372    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/common.sh@6 -- # : 128
00:14:40.372    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/common.sh@7 -- # : 512
00:14:40.372    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/common.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh
00:14:40.372     19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@6 -- # : false
00:14:40.372     19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@7 -- # : /root/vhost_test
00:14:40.372     19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@8 -- # : /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:14:40.372     19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@9 -- # : qemu-img
00:14:40.372      19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/..
00:14:40.372     19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest
00:14:40.372     19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:14:40.372     19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:14:40.372     19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:14:40.372     19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:14:40.372     19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:14:40.372       19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_restart_vm.sh
00:14:40.372      19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:14:40.372     19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:14:40.372     19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:14:40.372     19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:14:40.372     19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:14:40.372     19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:14:40.372     19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config
00:14:40.372      19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:14:40.372      19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@2 -- # vhost_0_main_core=0
00:14:40.372      19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:14:40.372      19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:14:40.372      19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:14:40.372      19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:14:40.372      19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:14:40.373      19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:14:40.373      19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:14:40.373      19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:14:40.373      19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:14:40.373      19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:14:40.373      19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:14:40.373      19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:14:40.373      19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:14:40.373      19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:14:40.373      19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:14:40.373      19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:14:40.373      19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:14:40.373      19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:14:40.373      19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:14:40.373      19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:14:40.373      19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:14:40.373      19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:14:40.373      19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:14:40.373      19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:14:40.373     19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh
00:14:40.373      19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:14:40.373      19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:14:40.373      19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:14:40.373      19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler
00:14:40.373      19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:14:40.373      19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh
00:14:40.373       19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:14:40.373        19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/cgroups.sh@244 -- # check_cgroup
00:14:40.373        19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:14:40.373        19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:14:40.373        19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/cgroups.sh@10 -- # echo 2
00:14:40.373       19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/cgroups.sh@244 -- # cgroup_version=2
00:14:40.373    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/common.sh@11 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:14:40.373    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/common.sh@14 -- # [[ ! -e /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 ]]
00:14:40.373    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/common.sh@19 -- # QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:14:40.373   19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/common.sh
00:14:40.373   19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@12 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/autotest.config
00:14:40.373    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@1 -- # vhost_0_reactor_mask='[0-3]'
00:14:40.373    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@2 -- # vhost_0_main_core=0
00:14:40.373    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@4 -- # VM_0_qemu_mask=4-5
00:14:40.373    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:14:40.373    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@7 -- # VM_1_qemu_mask=6-7
00:14:40.373    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:14:40.373    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@10 -- # VM_2_qemu_mask=8-9
00:14:40.373    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:14:40.373   19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@14 -- # bdfs=($(get_nvme_bdfs))
00:14:40.373    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@14 -- # get_nvme_bdfs
00:14:40.373    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1498 -- # bdfs=()
00:14:40.373    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1498 -- # local bdfs
00:14:40.373    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:14:40.373     19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/gen_nvme.sh
00:14:40.373     19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:14:40.632    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:14:40.632    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0b:00.0
00:14:40.632    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@15 -- # get_vhost_dir 0
00:14:40.632    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:14:40.632    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:14:40.632    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:14:40.632   19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@15 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:14:40.632   19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@17 -- # virtio_type=virtio_scsi
00:14:40.632   19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@18 -- # [[ virtio_scsi != virtio_blk ]]
00:14:40.632   19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@18 -- # [[ virtio_scsi != virtio_scsi ]]
00:14:40.632   19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@31 -- # vhosttestinit
00:14:40.632   19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@37 -- # '[' '' == iso ']'
00:14:40.632   19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@41 -- # [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz ]]
00:14:40.632   19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@41 -- # [[ ! -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:14:40.632   19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@46 -- # [[ ! -f /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:14:40.632   19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@33 -- # vfu_tgt_run 0
00:14:40.632   19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@6 -- # local vhost_name=0
00:14:40.632   19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@7 -- # local vfio_user_dir vfu_pid_file rpc_py
00:14:40.632    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@9 -- # get_vhost_dir 0
00:14:40.632    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:14:40.632    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:14:40.632    19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:14:40.632   19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@9 -- # vfio_user_dir=/root/vhost_test/vhost/0
00:14:40.632   19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@10 -- # vfu_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:14:40.632   19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@11 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:14:40.632   19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@13 -- # mkdir -p /root/vhost_test/vhost/0
00:14:40.632   19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@15 -- # timing_enter vfu_tgt_start
00:14:40.632   19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@726 -- # xtrace_disable
00:14:40.632   19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:14:40.632   19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@17 -- # vfupid=555736
00:14:40.632   19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@16 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -r /root/vhost_test/vhost/0/rpc.sock -m 0xf -s 512
00:14:40.632   19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@18 -- # echo 555736
00:14:40.633   19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@20 -- # echo 'Process pid: 555736'
00:14:40.633  Process pid: 555736
00:14:40.633   19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@21 -- # echo 'waiting for app to run...'
00:14:40.633  waiting for app to run...
00:14:40.633   19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@22 -- # waitforlisten 555736 /root/vhost_test/vhost/0/rpc.sock
00:14:40.633   19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@835 -- # '[' -z 555736 ']'
00:14:40.633   19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@839 -- # local rpc_addr=/root/vhost_test/vhost/0/rpc.sock
00:14:40.633   19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:40.633   19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...'
00:14:40.633  Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...
00:14:40.633   19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:40.633   19:13:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:14:40.633  [2024-12-06 19:13:11.476218] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:14:40.633  [2024-12-06 19:13:11.476372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xf -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid555736 ]
00:14:40.633  EAL: No free 2048 kB hugepages reported on node 1
00:14:41.200  [2024-12-06 19:13:11.863633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:14:41.200  [2024-12-06 19:13:11.978168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:14:41.200  [2024-12-06 19:13:11.978273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:14:41.200  [2024-12-06 19:13:11.978317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:14:41.200  [2024-12-06 19:13:11.978306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:42.136   19:13:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:42.136   19:13:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@868 -- # return 0
00:14:42.136   19:13:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@24 -- # timing_exit vfu_tgt_start
00:14:42.136   19:13:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@732 -- # xtrace_disable
00:14:42.136   19:13:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:14:42.136   19:13:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@35 -- # vfu_vm_dir=/root/vhost_test/vms/vfu_tgt
00:14:42.136   19:13:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@36 -- # rm -rf /root/vhost_test/vms/vfu_tgt
00:14:42.136   19:13:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@37 -- # mkdir -p /root/vhost_test/vms/vfu_tgt
00:14:42.136   19:13:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@39 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_nvme_attach_controller -b Nvme0 -t pcie -a 0000:0b:00.0
00:14:45.420  Nvme0n1
00:14:45.421   19:13:15 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@42 -- # disk_no=1
00:14:45.421   19:13:15 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@43 -- # vm_num=1
00:14:45.421   19:13:15 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@44 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vfu_tgt_set_base_path /root/vhost_test/vms/vfu_tgt
00:14:45.421   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@46 -- # [[ virtio_scsi == \v\i\r\t\i\o\_\b\l\k ]]
00:14:45.421   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@48 -- # [[ virtio_scsi == \v\i\r\t\i\o\_\s\c\s\i ]]
00:14:45.421   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@49 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vfu_virtio_create_scsi_endpoint virtio.1 --num-io-queues=2 --qsize=512 --packed-ring
00:14:45.679   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@50 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vfu_virtio_scsi_add_target virtio.1 --scsi-target-num=0 --bdev-name Nvme0n1
00:14:45.939  [2024-12-06 19:13:16.775075] vfu_virtio_scsi.c: 886:vfu_virtio_scsi_add_target: *NOTICE*: virtio.1: added SCSI target 0 using bdev 'Nvme0n1'
00:14:45.939   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@53 -- # vm_setup --disk-type=vfio_user_virtio --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=1
00:14:45.939   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@518 -- # xtrace_disable
00:14:45.939   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:14:45.939  WARN: removing existing VM in '/root/vhost_test/vms/1'
00:14:45.939  INFO: Creating new VM in /root/vhost_test/vms/1
00:14:45.939  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:14:45.939  INFO: TASK MASK: 6-7
00:14:45.939   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@671 -- # local node_num=0
00:14:45.939   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@672 -- # local boot_disk_present=false
00:14:45.939   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:14:45.939   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:14:45.939   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:14:45.939   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:14:45.939   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:14:45.939   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:45.939   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:14:45.939   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:14:45.939  INFO: NUMA NODE: 0
00:14:45.939   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:14:45.939   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:14:45.939   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:14:45.939   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:14:45.939   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@677 -- # [[ -n '' ]]
00:14:45.939   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:14:45.939   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:14:45.939   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:14:45.939   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:14:45.939   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@686 -- # [[ -z '' ]]
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@701 -- # IFS=,
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@701 -- # read -r disk disk_type _
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@702 -- # [[ -z '' ]]
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@702 -- # disk_type=vfio_user_virtio
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@704 -- # case $disk_type in
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@766 -- # notice 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:14:45.940  INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@767 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/vfu_tgt/virtio.$disk")
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@768 -- # [[ 1 == '' ]]
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@780 -- # [[ -n '' ]]
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@785 -- # (( 0 ))
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh'
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh'
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh'
00:14:45.940  INFO: Saving to /root/vhost_test/vms/1/run.sh
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@787 -- # cat
00:14:45.940    19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/vfu_tgt/virtio.1
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/1/run.sh
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@827 -- # echo 10100
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@828 -- # echo 10101
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@829 -- # echo 10102
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/1/migration_port
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@832 -- # [[ -z '' ]]
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@834 -- # echo 10104
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@835 -- # echo 101
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@837 -- # [[ -z '' ]]
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@838 -- # [[ -z '' ]]
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@54 -- # vm_run 1
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@843 -- # local run_all=false
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@844 -- # local vms_to_run=
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@846 -- # getopts a-: optchar
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@856 -- # false
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@859 -- # shift 0
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@860 -- # for vm in "$@"
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@861 -- # vm_num_is_valid 1
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]]
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@866 -- # vms_to_run+=' 1'
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@871 -- # vm_is_running 1
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@373 -- # return 1
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/1/run.sh'
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh'
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh'
00:14:45.940  INFO: running /root/vhost_test/vms/1/run.sh
00:14:45.940   19:13:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@877 -- # /root/vhost_test/vms/1/run.sh
00:14:45.940  Running VM in /root/vhost_test/vms/1
00:14:46.505  [2024-12-06 19:13:17.250655] tgt_endpoint.c: 167:tgt_accept_poller: *NOTICE*: /root/vhost_test/vms/vfu_tgt/virtio.1: attached successfully
00:14:46.505  Waiting for QEMU pid file
00:14:47.438  === qemu.log ===
00:14:47.438  === qemu.log ===
00:14:47.438   19:13:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@55 -- # vm_wait_for_boot 60 1
00:14:47.438   19:13:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@913 -- # assert_number 60
00:14:47.438   19:13:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@281 -- # [[ 60 =~ [0-9]+ ]]
00:14:47.438   19:13:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@281 -- # return 0
00:14:47.438   19:13:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@915 -- # xtrace_disable
00:14:47.438   19:13:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:14:47.438  INFO: Waiting for VMs to boot
00:14:47.438  INFO: waiting for VM1 (/root/vhost_test/vms/1)
00:15:02.300  [2024-12-06 19:13:30.813602] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:15:07.565  
00:15:07.565  INFO: VM1 ready
00:15:07.565  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:15:07.825  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:15:08.766  INFO: all VMs ready
00:15:08.766   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@973 -- # return 0
00:15:08.766   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@58 -- # fio_bin=--fio-bin=/usr/src/fio-static/fio
00:15:08.766   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@59 -- # fio_disks=
00:15:08.766   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@60 -- # qemu_mask_param=VM_1_qemu_mask
00:15:08.766   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@62 -- # host_name=VM-1-6-7
00:15:08.766   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@63 -- # vm_exec 1 'hostname VM-1-6-7'
00:15:08.766   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:15:08.766   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:08.766   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:15:08.766   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:15:08.766   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:15:08.766    19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:15:08.766    19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:15:08.766    19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:08.766    19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:15:08.766    19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:15:08.766    19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:15:08.766   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'hostname VM-1-6-7'
00:15:08.766  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:15:08.766   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@64 -- # vm_start_fio_server --fio-bin=/usr/src/fio-static/fio 1
00:15:08.766   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@977 -- # local OPTIND optchar
00:15:08.766   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@978 -- # local readonly=
00:15:08.766   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@979 -- # local fio_bin=
00:15:08.766   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@980 -- # getopts :-: optchar
00:15:08.766   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@981 -- # case "$optchar" in
00:15:08.766   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@983 -- # case "$OPTARG" in
00:15:08.766   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@984 -- # local fio_bin=/usr/src/fio-static/fio
00:15:08.766   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@980 -- # getopts :-: optchar
00:15:08.766   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@993 -- # shift 1
00:15:08.766   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@994 -- # for vm_num in "$@"
00:15:08.766   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@995 -- # notice 'Starting fio server on VM1'
00:15:08.766   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Starting fio server on VM1'
00:15:08.766   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:15:08.766   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:15:08.766   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:15:08.766   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:15:08.766   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:15:08.766   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Starting fio server on VM1'
00:15:08.766  INFO: Starting fio server on VM1
00:15:08.766   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@996 -- # [[ /usr/src/fio-static/fio != '' ]]
00:15:08.766   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@997 -- # vm_exec 1 'cat > /root/fio; chmod +x /root/fio'
00:15:08.766   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:15:08.766   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:08.766   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:15:08.766   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:15:08.766   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:15:08.766    19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:15:08.766    19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:15:08.766    19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:08.766    19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:15:08.766    19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:15:08.766    19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:15:08.766   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/fio; chmod +x /root/fio'
00:15:08.766  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:15:09.026   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@998 -- # vm_exec 1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:15:09.026   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:15:09.026   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:09.026   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:15:09.026   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:15:09.026   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:15:09.026    19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:15:09.026    19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:15:09.026    19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:09.026    19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:15:09.026    19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:15:09.026    19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:15:09.026   19:13:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:15:09.026  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@66 -- # disks_before_restart=
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@67 -- # get_disks virtio_scsi 1
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@24 -- # [[ virtio_scsi == \v\i\r\t\i\o\_\s\c\s\i ]]
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@25 -- # vm_check_scsi_location 1
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1014 -- # local 'script=shopt -s nullglob;
00:15:09.286  	for entry in /sys/block/sd*; do
00:15:09.286  		disk_type="$(cat $entry/device/vendor)";
00:15:09.286  		if [[ $disk_type == INTEL* ]] || [[ $disk_type == RAWSCSI* ]] || [[ $disk_type == LIO-ORG* ]]; then
00:15:09.286  			fname=$(basename $entry);
00:15:09.286  			echo -n " $fname";
00:15:09.286  		fi;
00:15:09.286  	done'
00:15:09.286    19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1016 -- # echo 'shopt -s nullglob;
00:15:09.286  	for entry in /sys/block/sd*; do
00:15:09.286  		disk_type="$(cat $entry/device/vendor)";
00:15:09.286  		if [[ $disk_type == INTEL* ]] || [[ $disk_type == RAWSCSI* ]] || [[ $disk_type == LIO-ORG* ]]; then
00:15:09.286  			fname=$(basename $entry);
00:15:09.286  			echo -n " $fname";
00:15:09.286  		fi;
00:15:09.286  	done'
00:15:09.286    19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1016 -- # vm_exec 1 bash -s
00:15:09.286    19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:15:09.286    19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:09.286    19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:15:09.286    19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:15:09.286    19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:15:09.286     19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:15:09.286     19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:15:09.286     19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:09.286     19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:15:09.286     19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:15:09.286     19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:15:09.286    19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 bash -s
00:15:09.286  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1016 -- # SCSI_DISK=' sdb'
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1018 -- # [[ -z  sdb ]]
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@68 -- # disks_before_restart=' sdb'
00:15:09.286    19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@70 -- # printf :/dev/%s sdb
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@70 -- # fio_disks=' --vm=1:/dev/sdb'
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@71 -- # job_file=default_integrity.job
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@74 -- # run_fio --fio-bin=/usr/src/fio-static/fio --job-file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job --out=/root/vhost_test/fio_results --vm=1:/dev/sdb
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1053 -- # local arg
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1054 -- # local job_file=
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1055 -- # local fio_bin=
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1056 -- # vms=()
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1056 -- # local vms
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1057 -- # local out=
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1058 -- # local vm
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1059 -- # local run_server_mode=true
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1060 -- # local run_plugin_mode=false
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1061 -- # local fio_start_cmd
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1062 -- # local fio_output_format=normal
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1063 -- # local fio_gtod_reduce=false
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1064 -- # local wait_for_fio=true
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1066 -- # for arg in "$@"
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1067 -- # case "$arg" in
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1069 -- # local fio_bin=/usr/src/fio-static/fio
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1066 -- # for arg in "$@"
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1067 -- # case "$arg" in
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1068 -- # local job_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1066 -- # for arg in "$@"
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1067 -- # case "$arg" in
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1072 -- # local out=/root/vhost_test/fio_results
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1073 -- # mkdir -p /root/vhost_test/fio_results
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1066 -- # for arg in "$@"
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1067 -- # case "$arg" in
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1070 -- # vms+=("${arg#*=}")
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1092 -- # [[ -n /usr/src/fio-static/fio ]]
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1092 -- # [[ ! -r /usr/src/fio-static/fio ]]
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1097 -- # [[ -z /usr/src/fio-static/fio ]]
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1101 -- # [[ ! -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job ]]
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1106 -- # fio_start_cmd='/usr/src/fio-static/fio --eta=never '
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1108 -- # local job_fname
00:15:09.286    19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1109 -- # basename /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1109 -- # job_fname=default_integrity.job
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1110 -- # log_fname=default_integrity.log
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1111 -- # fio_start_cmd+=' --output=/root/vhost_test/fio_results/default_integrity.log --output-format=normal '
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1114 -- # for vm in "${vms[@]}"
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1115 -- # local vm_num=1
00:15:09.286   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1116 -- # local vmdisks=/dev/sdb
00:15:09.287   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1118 -- # sed 's@filename=@filename=/dev/sdb@;s@description=\(.*\)@description=\1 (VM=1)@' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:15:09.287   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1119 -- # vm_exec 1 'cat > /root/default_integrity.job'
00:15:09.287   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:15:09.287   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:09.287   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:15:09.287   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:15:09.287   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:15:09.287    19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:15:09.287    19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:15:09.287    19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:09.287    19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:15:09.287    19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:15:09.287    19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:15:09.287   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/default_integrity.job'
00:15:09.547  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:15:09.547   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1121 -- # false
00:15:09.547   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1125 -- # vm_exec 1 cat /root/default_integrity.job
00:15:09.547   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:15:09.547   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:09.547   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:15:09.547   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:15:09.547   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:15:09.547    19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:15:09.547    19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:15:09.547    19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:09.547    19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:15:09.547    19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:15:09.547    19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:15:09.547   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 cat /root/default_integrity.job
00:15:09.547  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:15:09.806  [global]
00:15:09.806  blocksize_range=4k-512k
00:15:09.806  iodepth=512
00:15:09.806  iodepth_batch=128
00:15:09.806  iodepth_low=256
00:15:09.806  ioengine=libaio
00:15:09.806  size=1G
00:15:09.806  io_size=4G
00:15:09.806  filename=/dev/sdb
00:15:09.806  group_reporting
00:15:09.806  thread
00:15:09.806  numjobs=1
00:15:09.806  direct=1
00:15:09.806  rw=randwrite
00:15:09.806  do_verify=1
00:15:09.806  verify=md5
00:15:09.806  verify_backlog=1024
00:15:09.806  fsync_on_close=1
00:15:09.806  verify_state_save=0
00:15:09.806  [nvme-host]
00:15:09.807   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1127 -- # true
00:15:09.807    19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1128 -- # vm_fio_socket 1
00:15:09.807    19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@326 -- # vm_num_is_valid 1
00:15:09.807    19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:09.807    19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:15:09.807    19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@327 -- # local vm_dir=/root/vhost_test/vms/1
00:15:09.807    19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@329 -- # cat /root/vhost_test/vms/1/fio_socket
00:15:09.807   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1128 -- # fio_start_cmd+='--client=127.0.0.1,10101 --remote-config /root/default_integrity.job '
00:15:09.807   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1131 -- # true
00:15:09.807   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1147 -- # true
00:15:09.807   19:13:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1161 -- # /usr/src/fio-static/fio --eta=never --output=/root/vhost_test/fio_results/default_integrity.log --output-format=normal --client=127.0.0.1,10101 --remote-config /root/default_integrity.job
00:15:10.748  [2024-12-06 19:13:41.614323] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:15:16.170  [2024-12-06 19:13:46.302069] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:15:16.170  [2024-12-06 19:13:46.564942] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:15:20.371  [2024-12-06 19:13:51.014058] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:15:20.371  [2024-12-06 19:13:51.035716] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:15:20.371  [2024-12-06 19:13:51.290877] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:15:20.628   19:13:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1162 -- # sleep 1
00:15:21.567   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1164 -- # [[ normal == \j\s\o\n ]]
00:15:21.567   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1172 -- # [[ ! -n '' ]]
00:15:21.567   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1173 -- # cat /root/vhost_test/fio_results/default_integrity.log
00:15:21.567  hostname=VM-1-6-7, be=0, 64-bit, os=Linux, arch=x86-64, fio=fio-3.35, flags=1
00:15:21.567  <VM-1-6-7> nvme-host: (g=0): rw=randwrite, bs=(R) 4096B-512KiB, (W) 4096B-512KiB, (T) 4096B-512KiB, ioengine=libaio, iodepth=512
00:15:21.567  <VM-1-6-7> Starting 1 thread
00:15:21.567  <VM-1-6-7> 
00:15:21.567  nvme-host: (groupid=0, jobs=1): err= 0: pid=950: Fri Dec  6 19:13:51 2024
00:15:21.567    read: IOPS=1297, BW=218MiB/s (228MB/s)(2048MiB/9410msec)
00:15:21.567      slat (usec): min=63, max=19316, avg=2992.42, stdev=3874.47
00:15:21.567      clat (msec): min=5, max=346, avg=135.69, stdev=73.41
00:15:21.567       lat (msec): min=5, max=348, avg=138.68, stdev=73.20
00:15:21.567      clat percentiles (msec):
00:15:21.567       |  1.00th=[   13],  5.00th=[   22], 10.00th=[   44], 20.00th=[   75],
00:15:21.567       | 30.00th=[   90], 40.00th=[  110], 50.00th=[  129], 60.00th=[  148],
00:15:21.567       | 70.00th=[  171], 80.00th=[  201], 90.00th=[  239], 95.00th=[  271],
00:15:21.567       | 99.00th=[  321], 99.50th=[  330], 99.90th=[  342], 99.95th=[  342],
00:15:21.567       | 99.99th=[  347]
00:15:21.567    write: IOPS=1380, BW=232MiB/s (243MB/s)(2048MiB/8844msec); 0 zone resets
00:15:21.567      slat (usec): min=368, max=76449, avg=22387.47, stdev=15027.02
00:15:21.567      clat (msec): min=4, max=307, avg=122.73, stdev=67.35
00:15:21.567       lat (msec): min=5, max=339, avg=145.12, stdev=70.32
00:15:21.567      clat percentiles (msec):
00:15:21.567       |  1.00th=[    7],  5.00th=[   18], 10.00th=[   29], 20.00th=[   68],
00:15:21.567       | 30.00th=[   83], 40.00th=[   99], 50.00th=[  120], 60.00th=[  136],
00:15:21.567       | 70.00th=[  155], 80.00th=[  178], 90.00th=[  218], 95.00th=[  245],
00:15:21.567       | 99.00th=[  284], 99.50th=[  309], 99.90th=[  309], 99.95th=[  309],
00:15:21.567       | 99.99th=[  309]
00:15:21.567     bw (  KiB/s): min=90792, max=364920, per=94.18%, avg=223337.78, stdev=79795.24, samples=18
00:15:21.567     iops        : min=  512, max= 2048, avg=1275.56, stdev=527.64, samples=18
00:15:21.567    lat (msec)   : 10=1.15%, 20=4.71%, 50=7.87%, 100=24.60%, 250=56.07%
00:15:21.567    lat (msec)   : 500=5.59%
00:15:21.567    cpu          : usr=92.67%, sys=2.56%, ctx=483, majf=0, minf=35
00:15:21.567    IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.5%, >=64=99.1%
00:15:21.567       submit    : 0=0.0%, 4=0.0%, 8=1.2%, 16=0.0%, 32=0.0%, 64=19.2%, >=64=79.6%
00:15:21.567       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:15:21.567       issued rwts: total=12208,12208,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:21.567       latency   : target=0, window=0, percentile=100.00%, depth=512
00:15:21.567  
00:15:21.567  Run status group 0 (all jobs):
00:15:21.567     READ: bw=218MiB/s (228MB/s), 218MiB/s-218MiB/s (228MB/s-228MB/s), io=2048MiB (2147MB), run=9410-9410msec
00:15:21.567    WRITE: bw=232MiB/s (243MB/s), 232MiB/s-232MiB/s (243MB/s-243MB/s), io=2048MiB (2147MB), run=8844-8844msec
00:15:21.567  
00:15:21.567  Disk stats (read/write):
00:15:21.567    sdb: ios=12244/12285, merge=63/88, ticks=147177/99907, in_queue=247085, util=28.31%
00:15:21.567   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@77 -- # notice 'Shutting down virtual machine...'
00:15:21.567   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine...'
00:15:21.567   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:15:21.567   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:15:21.567   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:15:21.567   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:15:21.567   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:15:21.567   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine...'
00:15:21.567  INFO: Shutting down virtual machine...
00:15:21.567   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@78 -- # vm_shutdown_all
00:15:21.567   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@487 -- # local timeo=90 vms vm
00:15:21.567   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@489 -- # vms=($(vm_list_all))
00:15:21.567    19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@489 -- # vm_list_all
00:15:21.567    19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@466 -- # vms=()
00:15:21.567    19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@466 -- # local vms
00:15:21.567    19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:15:21.568    19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:15:21.568    19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:15:21.568   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:15:21.568   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@492 -- # vm_shutdown 1
00:15:21.568   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@417 -- # vm_num_is_valid 1
00:15:21.568   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:21.568   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:15:21.568   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/1
00:15:21.568   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/1 ]]
00:15:21.568   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@424 -- # vm_is_running 1
00:15:21.568   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:15:21.568   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:21.568   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:15:21.568   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:15:21.568   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:15:21.568   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:15:21.568    19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:15:21.568   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # vm_pid=556451
00:15:21.568   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 556451
00:15:21.568   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@380 -- # return 0
00:15:21.568   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1'
00:15:21.568   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1'
00:15:21.568   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:15:21.568   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:15:21.568   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:15:21.568   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:15:21.568   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:15:21.568   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1'
00:15:21.568  INFO: Shutting down virtual machine /root/vhost_test/vms/1
00:15:21.568   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@432 -- # set +e
00:15:21.568   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@433 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\'''
00:15:21.568   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:15:21.568   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:21.568   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:15:21.568   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:15:21.568   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:15:21.568    19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:15:21.568    19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:15:21.568    19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:21.568    19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:15:21.568    19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:15:21.568    19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:15:21.568   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:15:21.568  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:15:21.826   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@434 -- # notice 'VM1 is shutting down - wait a while to complete'
00:15:21.826   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete'
00:15:21.826   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:15:21.826   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:15:21.826   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:15:21.826   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:15:21.826   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:15:21.826   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete'
00:15:21.826  INFO: VM1 is shutting down - wait a while to complete
00:15:21.826   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@435 -- # set -e
00:15:21.826   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@495 -- # notice 'Waiting for VMs to shutdown...'
00:15:21.826   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...'
00:15:21.826   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:15:21.826   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:15:21.826   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:15:21.826   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:15:21.826   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:15:21.826   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...'
00:15:21.826  INFO: Waiting for VMs to shutdown...
00:15:21.826   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:15:21.827   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:15:21.827   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:15:21.827   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:15:21.827   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:21.827   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:15:21.827   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:15:21.827   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:15:21.827   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:15:21.827    19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:15:21.827   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # vm_pid=556451
00:15:21.827   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 556451
00:15:21.827   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@380 -- # return 0
00:15:21.827   19:13:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:15:22.768   19:13:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:15:22.769   19:13:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:15:22.769   19:13:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:15:22.769   19:13:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:15:22.769   19:13:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:22.769   19:13:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:15:22.769   19:13:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:15:22.769   19:13:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:15:22.769   19:13:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:15:22.769    19:13:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:15:22.769   19:13:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # vm_pid=556451
00:15:22.769   19:13:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 556451
00:15:22.769   19:13:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@380 -- # return 0
00:15:22.769   19:13:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:15:23.706   19:13:54 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:15:23.706   19:13:54 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:15:23.706   19:13:54 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:15:23.706   19:13:54 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:15:23.706   19:13:54 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:23.706   19:13:54 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:15:23.707   19:13:54 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:15:23.707   19:13:54 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:15:23.707   19:13:54 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@373 -- # return 1
00:15:23.707   19:13:54 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:15:23.707   19:13:54 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:15:24.645   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 0 > 0 ))
00:15:24.645   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@503 -- # (( 0 == 0 ))
00:15:24.645   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@504 -- # notice 'All VMs successfully shut down'
00:15:24.645   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down'
00:15:24.645   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:15:24.645   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:15:24.645   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:15:24.645   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:15:24.645   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:15:24.645   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down'
00:15:24.645  INFO: All VMs successfully shut down
00:15:24.645   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@505 -- # return 0
00:15:24.645   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@81 -- # vm_setup --disk-type=vfio_user_virtio --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=1
00:15:24.645   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@518 -- # xtrace_disable
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:15:24.905  WARN: removing existing VM in '/root/vhost_test/vms/1'
00:15:24.905  INFO: Creating new VM in /root/vhost_test/vms/1
00:15:24.905  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:15:24.905  INFO: TASK MASK: 6-7
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@671 -- # local node_num=0
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@672 -- # local boot_disk_present=false
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:15:24.905  INFO: NUMA NODE: 0
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@677 -- # [[ -n '' ]]
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@686 -- # [[ -z '' ]]
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@701 -- # IFS=,
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@701 -- # read -r disk disk_type _
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@702 -- # [[ -z '' ]]
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@702 -- # disk_type=vfio_user_virtio
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@704 -- # case $disk_type in
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@766 -- # notice 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:15:24.905  INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@767 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/vfu_tgt/virtio.$disk")
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@768 -- # [[ 1 == '' ]]
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@780 -- # [[ -n '' ]]
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@785 -- # (( 0 ))
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh'
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh'
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh'
00:15:24.905  INFO: Saving to /root/vhost_test/vms/1/run.sh
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@787 -- # cat
00:15:24.905    19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/vfu_tgt/virtio.1
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/1/run.sh
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@827 -- # echo 10100
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@828 -- # echo 10101
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@829 -- # echo 10102
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/1/migration_port
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@832 -- # [[ -z '' ]]
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@834 -- # echo 10104
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@835 -- # echo 101
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@837 -- # [[ -z '' ]]
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@838 -- # [[ -z '' ]]
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@82 -- # vm_run 1
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@843 -- # local run_all=false
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@844 -- # local vms_to_run=
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@846 -- # getopts a-: optchar
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@856 -- # false
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@859 -- # shift 0
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@860 -- # for vm in "$@"
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@861 -- # vm_num_is_valid 1
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]]
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@866 -- # vms_to_run+=' 1'
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@871 -- # vm_is_running 1
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:24.905   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:15:24.906   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:15:24.906   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:15:24.906   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@373 -- # return 1
00:15:24.906   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/1/run.sh'
00:15:24.906   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh'
00:15:24.906   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:15:24.906   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:15:24.906   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:15:24.906   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:15:24.906   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:15:24.906   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh'
00:15:24.906  INFO: running /root/vhost_test/vms/1/run.sh
00:15:24.906   19:13:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@877 -- # /root/vhost_test/vms/1/run.sh
00:15:24.906  Running VM in /root/vhost_test/vms/1
00:15:25.166  [2024-12-06 19:13:56.058444] tgt_endpoint.c: 167:tgt_accept_poller: *NOTICE*: /root/vhost_test/vms/vfu_tgt/virtio.1: attached successfully
00:15:25.426  Waiting for QEMU pid file
00:15:26.366  === qemu.log ===
00:15:26.366  === qemu.log ===
00:15:26.366   19:13:57 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@83 -- # vm_wait_for_boot 60 1
00:15:26.366   19:13:57 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@913 -- # assert_number 60
00:15:26.366   19:13:57 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@281 -- # [[ 60 =~ [0-9]+ ]]
00:15:26.366   19:13:57 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@281 -- # return 0
00:15:26.366   19:13:57 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@915 -- # xtrace_disable
00:15:26.366   19:13:57 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:15:26.366  INFO: Waiting for VMs to boot
00:15:26.366  INFO: waiting for VM1 (/root/vhost_test/vms/1)
00:15:41.266  [2024-12-06 19:14:09.676189] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:15:46.545  
00:15:46.545  INFO: VM1 ready
00:15:46.545  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:15:46.806  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:15:47.746  INFO: all VMs ready
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@973 -- # return 0
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@86 -- # disks_after_restart=
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@87 -- # get_disks virtio_scsi 1
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@24 -- # [[ virtio_scsi == \v\i\r\t\i\o\_\s\c\s\i ]]
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@25 -- # vm_check_scsi_location 1
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1014 -- # local 'script=shopt -s nullglob;
00:15:47.746  	for entry in /sys/block/sd*; do
00:15:47.746  		disk_type="$(cat $entry/device/vendor)";
00:15:47.746  		if [[ $disk_type == INTEL* ]] || [[ $disk_type == RAWSCSI* ]] || [[ $disk_type == LIO-ORG* ]]; then
00:15:47.746  			fname=$(basename $entry);
00:15:47.746  			echo -n " $fname";
00:15:47.746  		fi;
00:15:47.746  	done'
00:15:47.746    19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1016 -- # echo 'shopt -s nullglob;
00:15:47.746  	for entry in /sys/block/sd*; do
00:15:47.746  		disk_type="$(cat $entry/device/vendor)";
00:15:47.746  		if [[ $disk_type == INTEL* ]] || [[ $disk_type == RAWSCSI* ]] || [[ $disk_type == LIO-ORG* ]]; then
00:15:47.746  			fname=$(basename $entry);
00:15:47.746  			echo -n " $fname";
00:15:47.746  		fi;
00:15:47.746  	done'
00:15:47.746    19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1016 -- # vm_exec 1 bash -s
00:15:47.746    19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:15:47.746    19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:47.746    19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:15:47.746    19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:15:47.746    19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:15:47.746     19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:15:47.746     19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:15:47.746     19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:47.746     19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:15:47.746     19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:15:47.746     19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:15:47.746    19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 bash -s
00:15:47.746  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1016 -- # SCSI_DISK=' sdb'
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1018 -- # [[ -z  sdb ]]
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@88 -- # disks_after_restart=' sdb'
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@90 -- # [[  sdb != \ \s\d\b ]]
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@96 -- # notice 'Shutting down virtual machine...'
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine...'
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine...'
00:15:47.746  INFO: Shutting down virtual machine...
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@97 -- # vm_shutdown_all
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@487 -- # local timeo=90 vms vm
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@489 -- # vms=($(vm_list_all))
00:15:47.746    19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@489 -- # vm_list_all
00:15:47.746    19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@466 -- # vms=()
00:15:47.746    19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@466 -- # local vms
00:15:47.746    19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:15:47.746    19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:15:47.746    19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@492 -- # vm_shutdown 1
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@417 -- # vm_num_is_valid 1
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/1
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/1 ]]
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@424 -- # vm_is_running 1
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:15:47.746    19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # vm_pid=561118
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 561118
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@380 -- # return 0
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1'
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1'
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1'
00:15:47.746  INFO: Shutting down virtual machine /root/vhost_test/vms/1
00:15:47.746   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@432 -- # set +e
00:15:47.747   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@433 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\'''
00:15:47.747   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:15:47.747   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:47.747   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:15:47.747   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:15:47.747   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:15:47.747    19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:15:47.747    19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:15:47.747    19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:47.747    19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:15:47.747    19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:15:47.747    19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:15:47.747   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:15:48.007  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:15:48.007   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@434 -- # notice 'VM1 is shutting down - wait a while to complete'
00:15:48.007   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete'
00:15:48.007   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:15:48.008   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:15:48.008   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:15:48.008   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:15:48.008   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:15:48.008   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete'
00:15:48.008  INFO: VM1 is shutting down - wait a while to complete
00:15:48.008   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@435 -- # set -e
00:15:48.008   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@495 -- # notice 'Waiting for VMs to shutdown...'
00:15:48.008   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...'
00:15:48.008   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:15:48.008   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:15:48.008   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:15:48.008   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:15:48.008   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:15:48.008   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...'
00:15:48.008  INFO: Waiting for VMs to shutdown...
00:15:48.008   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:15:48.008   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:15:48.008   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:15:48.008   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:15:48.008   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:48.008   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:15:48.008   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:15:48.008   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:15:48.008   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:15:48.008    19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:15:48.008   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # vm_pid=561118
00:15:48.008   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 561118
00:15:48.008   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@380 -- # return 0
00:15:48.008   19:14:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:15:48.947   19:14:19 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:15:48.947   19:14:19 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:15:48.947   19:14:19 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:15:48.947   19:14:19 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:15:48.947   19:14:19 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:48.947   19:14:19 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:15:48.947   19:14:19 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:15:48.947   19:14:19 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:15:48.947   19:14:19 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:15:48.947    19:14:19 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:15:48.947   19:14:19 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # vm_pid=561118
00:15:48.947   19:14:19 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 561118
00:15:48.947   19:14:19 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@380 -- # return 0
00:15:48.947   19:14:19 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:15:50.327   19:14:20 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:15:50.327   19:14:20 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:15:50.327   19:14:20 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:15:50.327   19:14:20 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:15:50.327   19:14:20 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:50.327   19:14:20 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:15:50.327   19:14:20 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:15:50.327   19:14:20 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:15:50.327   19:14:20 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@373 -- # return 1
00:15:50.327   19:14:20 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:15:50.327   19:14:20 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:15:51.263   19:14:21 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 0 > 0 ))
00:15:51.263   19:14:21 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@503 -- # (( 0 == 0 ))
00:15:51.263   19:14:21 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@504 -- # notice 'All VMs successfully shut down'
00:15:51.263   19:14:21 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down'
00:15:51.263   19:14:21 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:15:51.263   19:14:21 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:15:51.263   19:14:21 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:15:51.263   19:14:21 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:15:51.263   19:14:21 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:15:51.263   19:14:21 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down'
00:15:51.263  INFO: All VMs successfully shut down
00:15:51.263   19:14:21 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@505 -- # return 0
00:15:51.264   19:14:21 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@99 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_nvme_detach_controller Nvme0
00:15:51.264  [2024-12-06 19:14:22.145549] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (Nvme0n1) received event(SPDK_BDEV_EVENT_REMOVE)
00:15:52.673   19:14:23 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@101 -- # vhost_kill 0
00:15:52.673   19:14:23 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@202 -- # local rc=0
00:15:52.673   19:14:23 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@203 -- # local vhost_name=0
00:15:52.673   19:14:23 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@205 -- # [[ -z 0 ]]
00:15:52.673   19:14:23 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@210 -- # local vhost_dir
00:15:52.673    19:14:23 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@211 -- # get_vhost_dir 0
00:15:52.673    19:14:23 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:15:52.673    19:14:23 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:15:52.673    19:14:23 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:15:52.673   19:14:23 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@211 -- # vhost_dir=/root/vhost_test/vhost/0
00:15:52.673   19:14:23 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@212 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:15:52.673   19:14:23 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@214 -- # [[ ! -r /root/vhost_test/vhost/0/vhost.pid ]]
00:15:52.673   19:14:23 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@219 -- # timing_enter vhost_kill
00:15:52.673   19:14:23 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@726 -- # xtrace_disable
00:15:52.673   19:14:23 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:15:52.673   19:14:23 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@220 -- # local vhost_pid
00:15:52.673    19:14:23 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@221 -- # cat /root/vhost_test/vhost/0/vhost.pid
00:15:52.673   19:14:23 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@221 -- # vhost_pid=555736
00:15:52.673   19:14:23 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@222 -- # notice 'killing vhost (PID 555736) app'
00:15:52.673   19:14:23 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'killing vhost (PID 555736) app'
00:15:52.673   19:14:23 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:15:52.673   19:14:23 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:15:52.673   19:14:23 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:15:52.673   19:14:23 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:15:52.673   19:14:23 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:15:52.673   19:14:23 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: killing vhost (PID 555736) app'
00:15:52.673  INFO: killing vhost (PID 555736) app
00:15:52.674   19:14:23 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@224 -- # kill -INT 555736
00:15:52.674   19:14:23 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@225 -- # notice 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:15:52.674   19:14:23 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:15:52.674   19:14:23 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:15:52.674   19:14:23 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:15:52.674   19:14:23 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:15:52.674   19:14:23 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:15:52.674   19:14:23 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:15:52.674   19:14:23 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: sent SIGINT to vhost app - waiting 60 seconds to exit'
00:15:52.674  INFO: sent SIGINT to vhost app - waiting 60 seconds to exit
00:15:52.674   19:14:23 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@226 -- # (( i = 0 ))
00:15:52.674   19:14:23 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@226 -- # (( i < 60 ))
00:15:52.674   19:14:23 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@227 -- # kill -0 555736
00:15:52.674   19:14:23 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@228 -- # echo .
00:15:52.674  .
00:15:52.674   19:14:23 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@229 -- # sleep 1
00:15:53.610   19:14:24 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@226 -- # (( i++ ))
00:15:53.610   19:14:24 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@226 -- # (( i < 60 ))
00:15:53.610   19:14:24 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@227 -- # kill -0 555736
00:15:53.610   19:14:24 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@228 -- # echo .
00:15:53.610  .
00:15:53.610   19:14:24 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@229 -- # sleep 1
00:15:54.550   19:14:25 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@226 -- # (( i++ ))
00:15:54.550   19:14:25 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@226 -- # (( i < 60 ))
00:15:54.550   19:14:25 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@227 -- # kill -0 555736
00:15:54.550  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 227: kill: (555736) - No such process
00:15:54.550   19:14:25 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@231 -- # break
00:15:54.550   19:14:25 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@234 -- # kill -0 555736
00:15:54.550  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 234: kill: (555736) - No such process
00:15:54.550   19:14:25 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@239 -- # kill -0 555736
00:15:54.551  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 239: kill: (555736) - No such process
00:15:54.551   19:14:25 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@245 -- # is_pid_child 555736
00:15:54.551   19:14:25 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1686 -- # local pid=555736 _pid
00:15:54.551   19:14:25 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1688 -- # read -r _pid
00:15:54.551    19:14:25 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1685 -- # jobs -pr
00:15:54.551   19:14:25 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1689 -- # (( pid == _pid ))
00:15:54.551   19:14:25 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1688 -- # read -r _pid
00:15:54.551   19:14:25 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1692 -- # return 1
00:15:54.551   19:14:25 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@257 -- # timing_exit vhost_kill
00:15:54.551   19:14:25 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@732 -- # xtrace_disable
00:15:54.551   19:14:25 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:15:54.809   19:14:25 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@259 -- # rm -rf /root/vhost_test/vhost/0
00:15:54.809   19:14:25 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@261 -- # return 0
00:15:54.809   19:14:25 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@103 -- # vhosttestfini
00:15:54.809   19:14:25 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@54 -- # '[' '' == iso ']'
00:15:54.809  
00:15:54.809  real	1m14.438s
00:15:54.809  user	4m50.627s
00:15:54.809  sys	0m2.272s
00:15:54.809   19:14:25 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:54.809   19:14:25 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:15:54.809  ************************************
00:15:54.809  END TEST vfio_user_virtio_scsi_restart_vm
00:15:54.809  ************************************
00:15:54.809   19:14:25 vfio_user_qemu -- vfio_user/vfio_user.sh@19 -- # run_test vfio_user_virtio_bdevperf /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/initiator_bdevperf.sh
00:15:54.809   19:14:25 vfio_user_qemu -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:54.809   19:14:25 vfio_user_qemu -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:54.809   19:14:25 vfio_user_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:54.809  ************************************
00:15:54.809  START TEST vfio_user_virtio_bdevperf
00:15:54.809  ************************************
00:15:54.809   19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/initiator_bdevperf.sh
00:15:54.809  * Looking for test storage...
00:15:54.809  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:15:54.809    19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:15:54.809     19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version
00:15:54.809     19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:15:54.809    19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:15:54.809    19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:15:54.809    19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:15:54.809    19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:15:54.809    19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@336 -- # IFS=.-:
00:15:54.809    19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@336 -- # read -ra ver1
00:15:54.809    19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@337 -- # IFS=.-:
00:15:54.809    19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@337 -- # read -ra ver2
00:15:54.809    19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@338 -- # local 'op=<'
00:15:54.809    19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@340 -- # ver1_l=2
00:15:54.809    19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@341 -- # ver2_l=1
00:15:54.809    19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:15:54.809    19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@344 -- # case "$op" in
00:15:54.809    19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@345 -- # : 1
00:15:54.809    19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@364 -- # (( v = 0 ))
00:15:54.809    19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:15:54.809     19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@365 -- # decimal 1
00:15:54.809     19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@353 -- # local d=1
00:15:54.809     19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:54.810     19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@355 -- # echo 1
00:15:54.810    19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1
00:15:54.810     19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@366 -- # decimal 2
00:15:54.810     19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@353 -- # local d=2
00:15:54.810     19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:15:54.810     19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@355 -- # echo 2
00:15:54.810    19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2
00:15:54.810    19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:15:54.810    19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:15:54.810    19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@368 -- # return 0
00:15:54.810    19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:15:54.810    19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:15:54.810  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:54.810  		--rc genhtml_branch_coverage=1
00:15:54.810  		--rc genhtml_function_coverage=1
00:15:54.810  		--rc genhtml_legend=1
00:15:54.810  		--rc geninfo_all_blocks=1
00:15:54.810  		--rc geninfo_unexecuted_blocks=1
00:15:54.810  		
00:15:54.810  		'
00:15:54.810    19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:15:54.810  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:54.810  		--rc genhtml_branch_coverage=1
00:15:54.810  		--rc genhtml_function_coverage=1
00:15:54.810  		--rc genhtml_legend=1
00:15:54.810  		--rc geninfo_all_blocks=1
00:15:54.810  		--rc geninfo_unexecuted_blocks=1
00:15:54.810  		
00:15:54.810  		'
00:15:54.810    19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:15:54.810  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:54.810  		--rc genhtml_branch_coverage=1
00:15:54.810  		--rc genhtml_function_coverage=1
00:15:54.810  		--rc genhtml_legend=1
00:15:54.810  		--rc geninfo_all_blocks=1
00:15:54.810  		--rc geninfo_unexecuted_blocks=1
00:15:54.810  		
00:15:54.810  		'
00:15:54.810    19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:15:54.810  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:54.810  		--rc genhtml_branch_coverage=1
00:15:54.810  		--rc genhtml_function_coverage=1
00:15:54.810  		--rc genhtml_legend=1
00:15:54.810  		--rc geninfo_all_blocks=1
00:15:54.810  		--rc geninfo_unexecuted_blocks=1
00:15:54.810  		
00:15:54.810  		'
00:15:54.810   19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@9 -- # rpc_py=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:15:54.810   19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@11 -- # vfu_dir=/tmp/vfu_devices
00:15:54.810   19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@12 -- # rm -rf /tmp/vfu_devices
00:15:54.810   19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@13 -- # mkdir -p /tmp/vfu_devices
00:15:54.810   19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@17 -- # spdk_tgt_pid=564807
00:15:54.810   19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@16 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0xf -L vfu_virtio
00:15:54.810   19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@18 -- # waitforlisten 564807
00:15:54.810   19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 564807 ']'
00:15:54.810   19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:54.810   19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:54.810   19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:54.810  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:54.810   19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:54.810   19:14:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:15:55.069  [2024-12-06 19:14:25.843456] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:15:55.069  [2024-12-06 19:14:25.843603] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid564807 ]
00:15:55.069  EAL: No free 2048 kB hugepages reported on node 1
00:15:55.069  [2024-12-06 19:14:25.977448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:15:55.326  [2024-12-06 19:14:26.101377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:15:55.326  [2024-12-06 19:14:26.101437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:15:55.326  [2024-12-06 19:14:26.101481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:55.326  [2024-12-06 19:14:26.101501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:15:56.263   19:14:26 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:56.263   19:14:26 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:15:56.263   19:14:26 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create -b malloc0 64 512
00:15:56.522  malloc0
00:15:56.522   19:14:27 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create -b malloc1 64 512
00:15:56.781  malloc1
00:15:57.042   19:14:27 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@22 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create -b malloc2 64 512
00:15:57.301  malloc2
00:15:57.301   19:14:28 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@24 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py vfu_tgt_set_base_path /tmp/vfu_devices
00:15:57.558   19:14:28 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@27 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py vfu_virtio_create_blk_endpoint vfu.blk --bdev-name malloc0 --cpumask=0x1 --num-queues=2 --qsize=256 --packed-ring
00:15:57.816  [2024-12-06 19:14:28.577665] vfu_virtio.c:1533:vfu_virtio_endpoint_setup: *DEBUG*: mmap file /tmp/vfu_devices/vfu.blk_bar4, devmem_fd 470
00:15:57.816  [2024-12-06 19:14:28.577729] vfu_virtio.c:1695:vfu_virtio_get_device_info: *DEBUG*: /tmp/vfu_devices/vfu.blk: get device information, fd 470
00:15:57.816  [2024-12-06 19:14:28.577904] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.blk: get vendor capability, idx 0
00:15:57.816  [2024-12-06 19:14:28.577944] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.blk: get vendor capability, idx 1
00:15:57.816  [2024-12-06 19:14:28.577961] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.blk: get vendor capability, idx 2
00:15:57.816  [2024-12-06 19:14:28.577977] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.blk: get vendor capability, idx 3
00:15:57.816   19:14:28 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py vfu_virtio_create_scsi_endpoint vfu.scsi --cpumask 0x2 --num-io-queues=2 --qsize=256 --packed-ring
00:15:58.074  [2024-12-06 19:14:28.866778] vfu_virtio.c:1533:vfu_virtio_endpoint_setup: *DEBUG*: mmap file /tmp/vfu_devices/vfu.scsi_bar4, devmem_fd 574
00:15:58.074  [2024-12-06 19:14:28.866827] vfu_virtio.c:1695:vfu_virtio_get_device_info: *DEBUG*: /tmp/vfu_devices/vfu.scsi: get device information, fd 574
00:15:58.074  [2024-12-06 19:14:28.866912] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.scsi: get vendor capability, idx 0
00:15:58.074  [2024-12-06 19:14:28.866936] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.scsi: get vendor capability, idx 1
00:15:58.074  [2024-12-06 19:14:28.866950] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.scsi: get vendor capability, idx 2
00:15:58.074  [2024-12-06 19:14:28.866967] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.scsi: get vendor capability, idx 3
00:15:58.074   19:14:28 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@33 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py vfu_virtio_scsi_add_target vfu.scsi --scsi-target-num=0 --bdev-name malloc1
00:15:58.332  [2024-12-06 19:14:29.127819] vfu_virtio_scsi.c: 886:vfu_virtio_scsi_add_target: *NOTICE*: vfu.scsi: added SCSI target 0 using bdev 'malloc1'
00:15:58.332   19:14:29 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py vfu_virtio_scsi_add_target vfu.scsi --scsi-target-num=1 --bdev-name malloc2
00:15:58.592  [2024-12-06 19:14:29.388875] vfu_virtio_scsi.c: 886:vfu_virtio_scsi_add_target: *NOTICE*: vfu.scsi: added SCSI target 1 using bdev 'malloc2'
00:15:58.592   19:14:29 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@37 -- # bdevperf=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/examples/bdevperf
00:15:58.592   19:14:29 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@38 -- # bdevperf_rpc_sock=/tmp/bdevperf.sock
00:15:58.592   19:14:29 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@41 -- # bdevperf_pid=565219
00:15:58.592   19:14:29 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@40 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/examples/bdevperf -r /tmp/bdevperf.sock -g -s 2048 -q 256 -o 4096 -w randrw -M 50 -t 30 -m 0xf0 -L vfio_pci -L virtio_vfio_user
00:15:58.592   19:14:29 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@42 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
00:15:58.592   19:14:29 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@43 -- # waitforlisten 565219 /tmp/bdevperf.sock
00:15:58.592   19:14:29 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 565219 ']'
00:15:58.592   19:14:29 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/bdevperf.sock
00:15:58.592   19:14:29 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:58.592   19:14:29 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/bdevperf.sock...'
00:15:58.592  Waiting for process to start up and listen on UNIX domain socket /tmp/bdevperf.sock...
00:15:58.592   19:14:29 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:58.592   19:14:29 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:15:58.592  [2024-12-06 19:14:29.505631] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:15:58.592  [2024-12-06 19:14:29.505769] [ DPDK EAL parameters: bdevperf --no-shconf -c 0xf0 -m 2048 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid565219 ]
00:15:58.850  EAL: No free 2048 kB hugepages reported on node 1
00:15:59.786  [2024-12-06 19:14:30.453745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:15:59.786  [2024-12-06 19:14:30.586581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:15:59.786  [2024-12-06 19:14:30.586634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:15:59.786  [2024-12-06 19:14:30.586679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:15:59.786  [2024-12-06 19:14:30.586685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:16:00.351   19:14:31 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:00.351   19:14:31 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:16:00.351   19:14:31 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@44 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /tmp/bdevperf.sock bdev_virtio_attach_controller --dev-type scsi --trtype vfio-user --traddr /tmp/vfu_devices/vfu.scsi VirtioScsi0
00:16:00.611  [2024-12-06 19:14:31.496181] tgt_endpoint.c: 167:tgt_accept_poller: *NOTICE*: /tmp/vfu_devices/vfu.scsi: attached successfully
00:16:00.611  [2024-12-06 19:14:31.498374] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:16:00.611  [2024-12-06 19:14:31.499343] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:16:00.611  [2024-12-06 19:14:31.500359] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:16:00.611  [2024-12-06 19:14:31.501353] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:16:00.611  [2024-12-06 19:14:31.502394] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x4000, Offset 0x0, Flags 0xf, Cap offset 32
00:16:00.611  [2024-12-06 19:14:31.502454] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x3000, Map addr 0x7f4f94305000
00:16:00.611  [2024-12-06 19:14:31.503372] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:16:00.611  [2024-12-06 19:14:31.504373] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:16:00.611  [2024-12-06 19:14:31.505392] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:16:00.611  [2024-12-06 19:14:31.506392] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:16:00.611  [2024-12-06 19:14:31.507398] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:16:00.611  [2024-12-06 19:14:31.508920] vfio_user_pci.c:  65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x80000000
00:16:00.611  [2024-12-06 19:14:31.518085] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /tmp/vfu_devices/vfu.scsi Setup Successfully
00:16:00.611  [2024-12-06 19:14:31.519504] virtio_vfio_user.c:  32:virtio_vfio_user_read_dev_config: *DEBUG*: offset 0x0, length 0x4
00:16:00.611  [2024-12-06 19:14:31.520468] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x2000-0x2003, len = 4
00:16:00.611  [2024-12-06 19:14:31.520531] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status 0
00:16:00.611  [2024-12-06 19:14:31.521471] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x14-0x14, len = 1
00:16:00.611  [2024-12-06 19:14:31.521504] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_STATUS with 0x0
00:16:00.611  [2024-12-06 19:14:31.521520] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 0, set status 0
00:16:00.611  [2024-12-06 19:14:31.521536] vfu_virtio.c: 190:vfu_virtio_dev_reset: *DEBUG*: device vfu.scsi resetting
00:16:00.611  [2024-12-06 19:14:31.522474] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:16:00.611  [2024-12-06 19:14:31.522499] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0x0
00:16:00.611  [2024-12-06 19:14:31.522538] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 0
00:16:00.611  [2024-12-06 19:14:31.523487] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:16:00.611  [2024-12-06 19:14:31.523511] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0x0
00:16:00.611  [2024-12-06 19:14:31.523554] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 0
00:16:00.611  [2024-12-06 19:14:31.523583] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status 1
00:16:00.611  [2024-12-06 19:14:31.524493] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x14-0x14, len = 1
00:16:00.611  [2024-12-06 19:14:31.524516] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_STATUS with 0x1
00:16:00.611  [2024-12-06 19:14:31.524529] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 0, set status 1
00:16:00.611  [2024-12-06 19:14:31.525507] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:16:00.611  [2024-12-06 19:14:31.525526] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0x1
00:16:00.611  [2024-12-06 19:14:31.525569] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 1
00:16:00.611  [2024-12-06 19:14:31.526509] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:16:00.611  [2024-12-06 19:14:31.526528] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0x1
00:16:00.611  [2024-12-06 19:14:31.526565] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 1
00:16:00.611  [2024-12-06 19:14:31.526600] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status 3
00:16:00.611  [2024-12-06 19:14:31.527513] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x14-0x14, len = 1
00:16:00.611  [2024-12-06 19:14:31.527532] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_STATUS with 0x3
00:16:00.611  [2024-12-06 19:14:31.527547] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 1, set status 3
00:16:00.611  [2024-12-06 19:14:31.528513] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:16:00.611  [2024-12-06 19:14:31.528536] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0x3
00:16:00.611  [2024-12-06 19:14:31.528574] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 3
00:16:00.611  [2024-12-06 19:14:31.529523] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x0-0x3, len = 4
00:16:00.611  [2024-12-06 19:14:31.529546] vfu_virtio.c: 937:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_DFSELECT with 0x0
00:16:00.611  [2024-12-06 19:14:31.530519] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x4-0x7, len = 4
00:16:00.611  [2024-12-06 19:14:31.530544] vfu_virtio.c:1072:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_DF_LO with 0x10000007
00:16:00.611  [2024-12-06 19:14:31.531532] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x0-0x3, len = 4
00:16:00.611  [2024-12-06 19:14:31.531555] vfu_virtio.c: 937:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_DFSELECT with 0x1
00:16:00.611  [2024-12-06 19:14:31.532541] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x4-0x7, len = 4
00:16:00.611  [2024-12-06 19:14:31.532569] vfu_virtio.c:1067:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_DF_HI with 0x5
00:16:00.611  [2024-12-06 19:14:31.532615] virtio_vfio_user.c: 127:virtio_vfio_user_get_features: *DEBUG*: feature_hi 0x5, feature_low 0x10000007
00:16:00.611  [2024-12-06 19:14:31.533550] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x8-0xB, len = 4
00:16:00.611  [2024-12-06 19:14:31.533573] vfu_virtio.c: 943:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_GFSELECT with 0x0
00:16:00.611  [2024-12-06 19:14:31.534559] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0xC-0xF, len = 4
00:16:00.611  [2024-12-06 19:14:31.534583] vfu_virtio.c: 956:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_GF_LO with 0x3
00:16:00.611  [2024-12-06 19:14:31.534598] vfu_virtio.c: 255:virtio_dev_set_features: *DEBUG*: vfu.scsi: negotiated features 0x3
00:16:00.611  [2024-12-06 19:14:31.535568] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x8-0xB, len = 4
00:16:00.611  [2024-12-06 19:14:31.535587] vfu_virtio.c: 943:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_GFSELECT with 0x1
00:16:00.611  [2024-12-06 19:14:31.536575] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0xC-0xF, len = 4
00:16:00.611  [2024-12-06 19:14:31.536603] vfu_virtio.c: 951:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_GF_HI with 0x1
00:16:00.611  [2024-12-06 19:14:31.536623] vfu_virtio.c: 255:virtio_dev_set_features: *DEBUG*: vfu.scsi: negotiated features 0x100000003
00:16:00.611  [2024-12-06 19:14:31.536660] virtio_vfio_user.c: 176:virtio_vfio_user_set_features: *DEBUG*: features 0x100000003
00:16:00.611  [2024-12-06 19:14:31.537575] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:16:00.611  [2024-12-06 19:14:31.537600] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0x3
00:16:00.611  [2024-12-06 19:14:31.537646] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 3
00:16:00.611  [2024-12-06 19:14:31.537675] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status b
00:16:00.611  [2024-12-06 19:14:31.538583] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x14-0x14, len = 1
00:16:00.611  [2024-12-06 19:14:31.538606] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_STATUS with 0xb
00:16:00.611  [2024-12-06 19:14:31.538619] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 3, set status b
00:16:00.611  [2024-12-06 19:14:31.539601] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:16:00.611  [2024-12-06 19:14:31.539620] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0xb
00:16:00.611  [2024-12-06 19:14:31.539665] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status b
00:16:00.611  [2024-12-06 19:14:31.540602] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:16:00.611  [2024-12-06 19:14:31.540621] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x0
00:16:00.611  [2024-12-06 19:14:31.541605] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x18-0x19, len = 2
00:16:00.611  [2024-12-06 19:14:31.541624] vfu_virtio.c:1135:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ queue 0 PCI_COMMON_Q_SIZE with 0x100
00:16:00.611  [2024-12-06 19:14:31.541675] virtio_vfio_user.c: 216:virtio_vfio_user_get_queue_size: *DEBUG*: queue 0, size 256
00:16:00.611  [2024-12-06 19:14:31.542614] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:16:00.611  [2024-12-06 19:14:31.542633] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x0
00:16:00.611  [2024-12-06 19:14:31.543625] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x20-0x23, len = 4
00:16:00.611  [2024-12-06 19:14:31.543645] vfu_virtio.c:1020:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 0 PCI_COMMON_Q_DESCLO with 0x69aec000
00:16:00.611  [2024-12-06 19:14:31.544629] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x24-0x27, len = 4
00:16:00.611  [2024-12-06 19:14:31.544649] vfu_virtio.c:1025:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 0 PCI_COMMON_Q_DESCHI with 0x2000
00:16:00.611  [2024-12-06 19:14:31.545634] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x28-0x2B, len = 4
00:16:00.611  [2024-12-06 19:14:31.545654] vfu_virtio.c:1030:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 0 PCI_COMMON_Q_AVAILLO with 0x69aed000
00:16:00.611  [2024-12-06 19:14:31.546644] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x2C-0x2F, len = 4
00:16:00.611  [2024-12-06 19:14:31.546664] vfu_virtio.c:1035:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 0 PCI_COMMON_Q_AVAILHI with 0x2000
00:16:00.612  [2024-12-06 19:14:31.547658] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x30-0x33, len = 4
00:16:00.612  [2024-12-06 19:14:31.547678] vfu_virtio.c:1040:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 0 PCI_COMMON_Q_USEDLO with 0x69aee000
00:16:00.612  [2024-12-06 19:14:31.548656] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x34-0x37, len = 4
00:16:00.612  [2024-12-06 19:14:31.548679] vfu_virtio.c:1045:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 0 PCI_COMMON_Q_USEDHI with 0x2000
00:16:00.612  [2024-12-06 19:14:31.549661] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x1E-0x1F, len = 2
00:16:00.612  [2024-12-06 19:14:31.549680] vfu_virtio.c:1123:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_Q_NOFF with 0x0
00:16:00.612  [2024-12-06 19:14:31.550667] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2
00:16:00.612  [2024-12-06 19:14:31.550687] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x1
00:16:00.612  [2024-12-06 19:14:31.550703] vfu_virtio.c: 267:virtio_dev_enable_vq: *DEBUG*: vfu.scsi: enable vq 0
00:16:00.612  [2024-12-06 19:14:31.550715] vfu_virtio.c:  71:virtio_dev_map_vq: *DEBUG*: vfu.scsi: try to map vq 0
00:16:00.612  [2024-12-06 19:14:31.550753] vfu_virtio.c: 107:virtio_dev_map_vq: *DEBUG*: vfu.scsi: map vq 0 successfully
00:16:00.612  [2024-12-06 19:14:31.550803] virtio_vfio_user.c: 331:virtio_vfio_user_setup_queue: *DEBUG*: queue 0 addresses:
00:16:00.612  [2024-12-06 19:14:31.550832] virtio_vfio_user.c: 332:virtio_vfio_user_setup_queue: *DEBUG*: 	 desc_addr: 200069aec000
00:16:00.612  [2024-12-06 19:14:31.550849] virtio_vfio_user.c: 333:virtio_vfio_user_setup_queue: *DEBUG*: 	 aval_addr: 200069aed000
00:16:00.612  [2024-12-06 19:14:31.550862] virtio_vfio_user.c: 334:virtio_vfio_user_setup_queue: *DEBUG*: 	 used_addr: 200069aee000
00:16:00.612  [2024-12-06 19:14:31.551666] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:16:00.612  [2024-12-06 19:14:31.551690] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x1
00:16:00.612  [2024-12-06 19:14:31.552677] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x18-0x19, len = 2
00:16:00.612  [2024-12-06 19:14:31.552700] vfu_virtio.c:1135:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ queue 1 PCI_COMMON_Q_SIZE with 0x100
00:16:00.612  [2024-12-06 19:14:31.552745] virtio_vfio_user.c: 216:virtio_vfio_user_get_queue_size: *DEBUG*: queue 1, size 256
00:16:00.612  [2024-12-06 19:14:31.553680] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:16:00.612  [2024-12-06 19:14:31.553707] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x1
00:16:00.612  [2024-12-06 19:14:31.554684] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x20-0x23, len = 4
00:16:00.612  [2024-12-06 19:14:31.554708] vfu_virtio.c:1020:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 1 PCI_COMMON_Q_DESCLO with 0x69ae8000
00:16:00.612  [2024-12-06 19:14:31.555697] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x24-0x27, len = 4
00:16:00.612  [2024-12-06 19:14:31.555720] vfu_virtio.c:1025:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 1 PCI_COMMON_Q_DESCHI with 0x2000
00:16:00.612  [2024-12-06 19:14:31.556716] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x28-0x2B, len = 4
00:16:00.612  [2024-12-06 19:14:31.556741] vfu_virtio.c:1030:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 1 PCI_COMMON_Q_AVAILLO with 0x69ae9000
00:16:00.612  [2024-12-06 19:14:31.557707] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x2C-0x2F, len = 4
00:16:00.612  [2024-12-06 19:14:31.557731] vfu_virtio.c:1035:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 1 PCI_COMMON_Q_AVAILHI with 0x2000
00:16:00.612  [2024-12-06 19:14:31.558710] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x30-0x33, len = 4
00:16:00.612  [2024-12-06 19:14:31.558733] vfu_virtio.c:1040:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 1 PCI_COMMON_Q_USEDLO with 0x69aea000
00:16:00.871  [2024-12-06 19:14:31.559716] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x34-0x37, len = 4
00:16:00.871  [2024-12-06 19:14:31.559744] vfu_virtio.c:1045:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 1 PCI_COMMON_Q_USEDHI with 0x2000
00:16:00.871  [2024-12-06 19:14:31.560726] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x1E-0x1F, len = 2
00:16:00.871  [2024-12-06 19:14:31.560749] vfu_virtio.c:1123:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_Q_NOFF with 0x1
00:16:00.871  [2024-12-06 19:14:31.561728] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2
00:16:00.871  [2024-12-06 19:14:31.561756] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x1
00:16:00.871  [2024-12-06 19:14:31.561768] vfu_virtio.c: 267:virtio_dev_enable_vq: *DEBUG*: vfu.scsi: enable vq 1
00:16:00.871  [2024-12-06 19:14:31.561781] vfu_virtio.c:  71:virtio_dev_map_vq: *DEBUG*: vfu.scsi: try to map vq 1
00:16:00.871  [2024-12-06 19:14:31.561794] vfu_virtio.c: 107:virtio_dev_map_vq: *DEBUG*: vfu.scsi: map vq 1 successfully
00:16:00.871  [2024-12-06 19:14:31.561830] virtio_vfio_user.c: 331:virtio_vfio_user_setup_queue: *DEBUG*: queue 1 addresses:
00:16:00.871  [2024-12-06 19:14:31.561864] virtio_vfio_user.c: 332:virtio_vfio_user_setup_queue: *DEBUG*: 	 desc_addr: 200069ae8000
00:16:00.871  [2024-12-06 19:14:31.561879] virtio_vfio_user.c: 333:virtio_vfio_user_setup_queue: *DEBUG*: 	 aval_addr: 200069ae9000
00:16:00.871  [2024-12-06 19:14:31.561892] virtio_vfio_user.c: 334:virtio_vfio_user_setup_queue: *DEBUG*: 	 used_addr: 200069aea000
00:16:00.871  [2024-12-06 19:14:31.562740] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:16:00.871  [2024-12-06 19:14:31.562766] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x2
00:16:00.871  [2024-12-06 19:14:31.563743] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x18-0x19, len = 2
00:16:00.871  [2024-12-06 19:14:31.563762] vfu_virtio.c:1135:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ queue 2 PCI_COMMON_Q_SIZE with 0x100
00:16:00.871  [2024-12-06 19:14:31.563799] virtio_vfio_user.c: 216:virtio_vfio_user_get_queue_size: *DEBUG*: queue 2, size 256
00:16:00.871  [2024-12-06 19:14:31.564749] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:16:00.871  [2024-12-06 19:14:31.564767] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x2
00:16:00.871  [2024-12-06 19:14:31.565756] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x20-0x23, len = 4
00:16:00.871  [2024-12-06 19:14:31.565776] vfu_virtio.c:1020:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 2 PCI_COMMON_Q_DESCLO with 0x69ae4000
00:16:00.871  [2024-12-06 19:14:31.566770] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x24-0x27, len = 4
00:16:00.871  [2024-12-06 19:14:31.566789] vfu_virtio.c:1025:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 2 PCI_COMMON_Q_DESCHI with 0x2000
00:16:00.871  [2024-12-06 19:14:31.567776] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x28-0x2B, len = 4
00:16:00.871  [2024-12-06 19:14:31.567795] vfu_virtio.c:1030:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 2 PCI_COMMON_Q_AVAILLO with 0x69ae5000
00:16:00.871  [2024-12-06 19:14:31.568792] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x2C-0x2F, len = 4
00:16:00.871  [2024-12-06 19:14:31.568811] vfu_virtio.c:1035:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 2 PCI_COMMON_Q_AVAILHI with 0x2000
00:16:00.871  [2024-12-06 19:14:31.569792] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x30-0x33, len = 4
00:16:00.871  [2024-12-06 19:14:31.569812] vfu_virtio.c:1040:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 2 PCI_COMMON_Q_USEDLO with 0x69ae6000
00:16:00.871  [2024-12-06 19:14:31.570799] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x34-0x37, len = 4
00:16:00.871  [2024-12-06 19:14:31.570823] vfu_virtio.c:1045:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 2 PCI_COMMON_Q_USEDHI with 0x2000
00:16:00.871  [2024-12-06 19:14:31.571801] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x1E-0x1F, len = 2
00:16:00.871  [2024-12-06 19:14:31.571820] vfu_virtio.c:1123:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_Q_NOFF with 0x2
00:16:00.871  [2024-12-06 19:14:31.572809] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2
00:16:00.871  [2024-12-06 19:14:31.572828] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x1
00:16:00.871  [2024-12-06 19:14:31.572843] vfu_virtio.c: 267:virtio_dev_enable_vq: *DEBUG*: vfu.scsi: enable vq 2
00:16:00.871  [2024-12-06 19:14:31.572854] vfu_virtio.c:  71:virtio_dev_map_vq: *DEBUG*: vfu.scsi: try to map vq 2
00:16:00.871  [2024-12-06 19:14:31.572870] vfu_virtio.c: 107:virtio_dev_map_vq: *DEBUG*: vfu.scsi: map vq 2 successfully
00:16:00.871  [2024-12-06 19:14:31.572916] virtio_vfio_user.c: 331:virtio_vfio_user_setup_queue: *DEBUG*: queue 2 addresses:
00:16:00.871  [2024-12-06 19:14:31.572944] virtio_vfio_user.c: 332:virtio_vfio_user_setup_queue: *DEBUG*: 	 desc_addr: 200069ae4000
00:16:00.871  [2024-12-06 19:14:31.572962] virtio_vfio_user.c: 333:virtio_vfio_user_setup_queue: *DEBUG*: 	 aval_addr: 200069ae5000
00:16:00.871  [2024-12-06 19:14:31.572974] virtio_vfio_user.c: 334:virtio_vfio_user_setup_queue: *DEBUG*: 	 used_addr: 200069ae6000
00:16:00.871  [2024-12-06 19:14:31.573816] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:16:00.871  [2024-12-06 19:14:31.573844] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x3
00:16:00.871  [2024-12-06 19:14:31.574823] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x18-0x19, len = 2
00:16:00.871  [2024-12-06 19:14:31.574851] vfu_virtio.c:1135:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ queue 3 PCI_COMMON_Q_SIZE with 0x100
00:16:00.871  [2024-12-06 19:14:31.574896] virtio_vfio_user.c: 216:virtio_vfio_user_get_queue_size: *DEBUG*: queue 3, size 256
00:16:00.871  [2024-12-06 19:14:31.575828] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:16:00.871  [2024-12-06 19:14:31.575851] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x3
00:16:00.871  [2024-12-06 19:14:31.576843] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x20-0x23, len = 4
00:16:00.871  [2024-12-06 19:14:31.576867] vfu_virtio.c:1020:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 3 PCI_COMMON_Q_DESCLO with 0x69ae0000
00:16:00.871  [2024-12-06 19:14:31.577839] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x24-0x27, len = 4
00:16:00.871  [2024-12-06 19:14:31.577863] vfu_virtio.c:1025:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 3 PCI_COMMON_Q_DESCHI with 0x2000
00:16:00.871  [2024-12-06 19:14:31.578847] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x28-0x2B, len = 4
00:16:00.871  [2024-12-06 19:14:31.578870] vfu_virtio.c:1030:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 3 PCI_COMMON_Q_AVAILLO with 0x69ae1000
00:16:00.871  [2024-12-06 19:14:31.579854] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x2C-0x2F, len = 4
00:16:00.872  [2024-12-06 19:14:31.579877] vfu_virtio.c:1035:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 3 PCI_COMMON_Q_AVAILHI with 0x2000
00:16:00.872  [2024-12-06 19:14:31.580865] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x30-0x33, len = 4
00:16:00.872  [2024-12-06 19:14:31.580889] vfu_virtio.c:1040:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 3 PCI_COMMON_Q_USEDLO with 0x69ae2000
00:16:00.872  [2024-12-06 19:14:31.581872] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x34-0x37, len = 4
00:16:00.872  [2024-12-06 19:14:31.581899] vfu_virtio.c:1045:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 3 PCI_COMMON_Q_USEDHI with 0x2000
00:16:00.872  [2024-12-06 19:14:31.582879] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x1E-0x1F, len = 2
00:16:00.872  [2024-12-06 19:14:31.582909] vfu_virtio.c:1123:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_Q_NOFF with 0x3
00:16:00.872  [2024-12-06 19:14:31.583887] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2
00:16:00.872  [2024-12-06 19:14:31.583910] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x1
00:16:00.872  [2024-12-06 19:14:31.583923] vfu_virtio.c: 267:virtio_dev_enable_vq: *DEBUG*: vfu.scsi: enable vq 3
00:16:00.872  [2024-12-06 19:14:31.583936] vfu_virtio.c:  71:virtio_dev_map_vq: *DEBUG*: vfu.scsi: try to map vq 3
00:16:00.872  [2024-12-06 19:14:31.583949] vfu_virtio.c: 107:virtio_dev_map_vq: *DEBUG*: vfu.scsi: map vq 3 successfully
00:16:00.872  [2024-12-06 19:14:31.583984] virtio_vfio_user.c: 331:virtio_vfio_user_setup_queue: *DEBUG*: queue 3 addresses:
00:16:00.872  [2024-12-06 19:14:31.584019] virtio_vfio_user.c: 332:virtio_vfio_user_setup_queue: *DEBUG*: 	 desc_addr: 200069ae0000
00:16:00.872  [2024-12-06 19:14:31.584033] virtio_vfio_user.c: 333:virtio_vfio_user_setup_queue: *DEBUG*: 	 aval_addr: 200069ae1000
00:16:00.872  [2024-12-06 19:14:31.584047] virtio_vfio_user.c: 334:virtio_vfio_user_setup_queue: *DEBUG*: 	 used_addr: 200069ae2000
00:16:00.872  [2024-12-06 19:14:31.584899] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:16:00.872  [2024-12-06 19:14:31.584917] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0xb
00:16:00.872  [2024-12-06 19:14:31.584960] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status b
00:16:00.872  [2024-12-06 19:14:31.584993] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status f
00:16:00.872  [2024-12-06 19:14:31.585909] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x14-0x14, len = 1
00:16:00.872  [2024-12-06 19:14:31.585928] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_STATUS with 0xf
00:16:00.872  [2024-12-06 19:14:31.585943] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status b, set status f
00:16:00.872  [2024-12-06 19:14:31.585954] vfu_virtio.c:1365:vfu_virtio_dev_start: *DEBUG*: start vfu.scsi
00:16:00.872  [2024-12-06 19:14:31.588236] vfu_virtio.c:1377:vfu_virtio_dev_start: *DEBUG*: vfu.scsi is started with ret 0
00:16:00.872  [2024-12-06 19:14:31.589310] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:16:00.872  [2024-12-06 19:14:31.589335] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0xf
00:16:00.872  [2024-12-06 19:14:31.589374] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status f
00:16:00.872  VirtioScsi0t0 VirtioScsi0t1
00:16:00.872   19:14:31 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@46 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /tmp/bdevperf.sock bdev_virtio_attach_controller --dev-type blk --trtype vfio-user --traddr /tmp/vfu_devices/vfu.blk VirtioBlk0
00:16:01.132  [2024-12-06 19:14:31.858891] tgt_endpoint.c: 167:tgt_accept_poller: *NOTICE*: /tmp/vfu_devices/vfu.blk: attached successfully
00:16:01.133  [2024-12-06 19:14:31.861069] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:16:01.133  [2024-12-06 19:14:31.862056] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:16:01.133  [2024-12-06 19:14:31.863075] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:16:01.133  [2024-12-06 19:14:31.864088] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:16:01.133  [2024-12-06 19:14:31.865097] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x4000, Offset 0x0, Flags 0xf, Cap offset 32
00:16:01.133  [2024-12-06 19:14:31.865170] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x3000, Map addr 0x7f4f94304000
00:16:01.133  [2024-12-06 19:14:31.866107] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:16:01.133  [2024-12-06 19:14:31.867099] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:16:01.133  [2024-12-06 19:14:31.868121] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:16:01.133  [2024-12-06 19:14:31.869112] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:16:01.133  [2024-12-06 19:14:31.870120] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:16:01.133  [2024-12-06 19:14:31.871637] vfio_user_pci.c:  65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x80000000
00:16:01.133  [2024-12-06 19:14:31.880719] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user1, Path /tmp/vfu_devices/vfu.blk Setup Successfully
00:16:01.133  [2024-12-06 19:14:31.882232] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status 0
00:16:01.133  [2024-12-06 19:14:31.883218] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x14-0x14, len = 1
00:16:01.133  [2024-12-06 19:14:31.883247] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_STATUS with 0x0
00:16:01.133  [2024-12-06 19:14:31.883267] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 0, set status 0
00:16:01.133  [2024-12-06 19:14:31.883285] vfu_virtio.c: 190:vfu_virtio_dev_reset: *DEBUG*: device vfu.blk resetting
00:16:01.133  [2024-12-06 19:14:31.884221] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:16:01.133  [2024-12-06 19:14:31.884242] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0x0
00:16:01.133  [2024-12-06 19:14:31.884287] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 0
00:16:01.133  [2024-12-06 19:14:31.885232] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:16:01.133  [2024-12-06 19:14:31.885251] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0x0
00:16:01.133  [2024-12-06 19:14:31.885288] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 0
00:16:01.133  [2024-12-06 19:14:31.885324] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status 1
00:16:01.133  [2024-12-06 19:14:31.886236] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x14-0x14, len = 1
00:16:01.133  [2024-12-06 19:14:31.886255] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_STATUS with 0x1
00:16:01.133  [2024-12-06 19:14:31.886271] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 0, set status 1
00:16:01.133  [2024-12-06 19:14:31.887245] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:16:01.133  [2024-12-06 19:14:31.887272] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0x1
00:16:01.133  [2024-12-06 19:14:31.887301] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 1
00:16:01.133  [2024-12-06 19:14:31.888255] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:16:01.133  [2024-12-06 19:14:31.888278] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0x1
00:16:01.133  [2024-12-06 19:14:31.888312] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 1
00:16:01.133  [2024-12-06 19:14:31.888330] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status 3
00:16:01.133  [2024-12-06 19:14:31.889269] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x14-0x14, len = 1
00:16:01.133  [2024-12-06 19:14:31.889297] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_STATUS with 0x3
00:16:01.133  [2024-12-06 19:14:31.889310] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 1, set status 3
00:16:01.133  [2024-12-06 19:14:31.890282] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:16:01.133  [2024-12-06 19:14:31.890301] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0x3
00:16:01.133  [2024-12-06 19:14:31.890344] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 3
00:16:01.133  [2024-12-06 19:14:31.891297] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x0-0x3, len = 4
00:16:01.133  [2024-12-06 19:14:31.891316] vfu_virtio.c: 937:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_DFSELECT with 0x0
00:16:01.133  [2024-12-06 19:14:31.892308] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x4-0x7, len = 4
00:16:01.133  [2024-12-06 19:14:31.892328] vfu_virtio.c:1072:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_DF_LO with 0x10007646
00:16:01.133  [2024-12-06 19:14:31.893327] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x0-0x3, len = 4
00:16:01.133  [2024-12-06 19:14:31.893347] vfu_virtio.c: 937:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_DFSELECT with 0x1
00:16:01.133  [2024-12-06 19:14:31.894321] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x4-0x7, len = 4
00:16:01.133  [2024-12-06 19:14:31.894340] vfu_virtio.c:1067:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_DF_HI with 0x5
00:16:01.133  [2024-12-06 19:14:31.894378] virtio_vfio_user.c: 127:virtio_vfio_user_get_features: *DEBUG*: feature_hi 0x5, feature_low 0x10007646
00:16:01.133  [2024-12-06 19:14:31.895334] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x8-0xB, len = 4
00:16:01.133  [2024-12-06 19:14:31.895354] vfu_virtio.c: 943:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_GFSELECT with 0x0
00:16:01.133  [2024-12-06 19:14:31.896331] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0xC-0xF, len = 4
00:16:01.133  [2024-12-06 19:14:31.896351] vfu_virtio.c: 956:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_GF_LO with 0x3446
00:16:01.133  [2024-12-06 19:14:31.896369] vfu_virtio.c: 255:virtio_dev_set_features: *DEBUG*: vfu.blk: negotiated features 0x3446
00:16:01.133  [2024-12-06 19:14:31.897334] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x8-0xB, len = 4
00:16:01.133  [2024-12-06 19:14:31.897357] vfu_virtio.c: 943:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_GFSELECT with 0x1
00:16:01.133  [2024-12-06 19:14:31.898336] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0xC-0xF, len = 4
00:16:01.133  [2024-12-06 19:14:31.898360] vfu_virtio.c: 951:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_GF_HI with 0x1
00:16:01.133  [2024-12-06 19:14:31.898377] vfu_virtio.c: 255:virtio_dev_set_features: *DEBUG*: vfu.blk: negotiated features 0x100003446
00:16:01.133  [2024-12-06 19:14:31.898422] virtio_vfio_user.c: 176:virtio_vfio_user_set_features: *DEBUG*: features 0x100003446
00:16:01.133  [2024-12-06 19:14:31.899357] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:16:01.133  [2024-12-06 19:14:31.899376] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0x3
00:16:01.133  [2024-12-06 19:14:31.899413] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 3
00:16:01.133  [2024-12-06 19:14:31.899450] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status b
00:16:01.133  [2024-12-06 19:14:31.900362] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x14-0x14, len = 1
00:16:01.133  [2024-12-06 19:14:31.900381] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_STATUS with 0xb
00:16:01.133  [2024-12-06 19:14:31.900400] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 3, set status b
00:16:01.133  [2024-12-06 19:14:31.901363] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:16:01.133  [2024-12-06 19:14:31.901389] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0xb
00:16:01.133  [2024-12-06 19:14:31.901442] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status b
00:16:01.133  [2024-12-06 19:14:31.901480] virtio_vfio_user.c:  32:virtio_vfio_user_read_dev_config: *DEBUG*: offset 0x22, length 0x2
00:16:01.133  [2024-12-06 19:14:31.902365] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x2022-0x2023, len = 2
00:16:01.133  [2024-12-06 19:14:31.902411] virtio_vfio_user.c:  32:virtio_vfio_user_read_dev_config: *DEBUG*: offset 0x14, length 0x4
00:16:01.133  [2024-12-06 19:14:31.903381] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x2014-0x2017, len = 4
00:16:01.133  [2024-12-06 19:14:31.903433] virtio_vfio_user.c:  32:virtio_vfio_user_read_dev_config: *DEBUG*: offset 0x0, length 0x8
00:16:01.133  [2024-12-06 19:14:31.904386] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x2000-0x2007, len = 8
00:16:01.133  [2024-12-06 19:14:31.904431] virtio_vfio_user.c:  32:virtio_vfio_user_read_dev_config: *DEBUG*: offset 0x22, length 0x2
00:16:01.133  [2024-12-06 19:14:31.905400] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x2022-0x2023, len = 2
00:16:01.133  [2024-12-06 19:14:31.905451] virtio_vfio_user.c:  32:virtio_vfio_user_read_dev_config: *DEBUG*: offset 0x8, length 0x4
00:16:01.133  [2024-12-06 19:14:31.906408] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x2008-0x200B, len = 4
00:16:01.133  [2024-12-06 19:14:31.906452] virtio_vfio_user.c:  32:virtio_vfio_user_read_dev_config: *DEBUG*: offset 0xc, length 0x4
00:16:01.133  [2024-12-06 19:14:31.907422] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x200C-0x200F, len = 4
00:16:01.133  [2024-12-06 19:14:31.908427] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x16-0x17, len = 2
00:16:01.133  [2024-12-06 19:14:31.908450] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_SELECT with 0x0
00:16:01.133  [2024-12-06 19:14:31.909432] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x18-0x19, len = 2
00:16:01.133  [2024-12-06 19:14:31.909457] vfu_virtio.c:1135:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ queue 0 PCI_COMMON_Q_SIZE with 0x100
00:16:01.133  [2024-12-06 19:14:31.909504] virtio_vfio_user.c: 216:virtio_vfio_user_get_queue_size: *DEBUG*: queue 0, size 256
00:16:01.133  [2024-12-06 19:14:31.910445] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x16-0x17, len = 2
00:16:01.133  [2024-12-06 19:14:31.910468] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_SELECT with 0x0
00:16:01.133  [2024-12-06 19:14:31.911459] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x20-0x23, len = 4
00:16:01.134  [2024-12-06 19:14:31.911485] vfu_virtio.c:1020:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 0 PCI_COMMON_Q_DESCLO with 0x69adc000
00:16:01.134  [2024-12-06 19:14:31.912459] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x24-0x27, len = 4
00:16:01.134  [2024-12-06 19:14:31.912487] vfu_virtio.c:1025:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 0 PCI_COMMON_Q_DESCHI with 0x2000
00:16:01.134  [2024-12-06 19:14:31.913465] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x28-0x2B, len = 4
00:16:01.134  [2024-12-06 19:14:31.913489] vfu_virtio.c:1030:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 0 PCI_COMMON_Q_AVAILLO with 0x69add000
00:16:01.134  [2024-12-06 19:14:31.914476] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x2C-0x2F, len = 4
00:16:01.134  [2024-12-06 19:14:31.914504] vfu_virtio.c:1035:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 0 PCI_COMMON_Q_AVAILHI with 0x2000
00:16:01.134  [2024-12-06 19:14:31.915484] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x30-0x33, len = 4
00:16:01.134  [2024-12-06 19:14:31.915509] vfu_virtio.c:1040:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 0 PCI_COMMON_Q_USEDLO with 0x69ade000
00:16:01.134  [2024-12-06 19:14:31.916492] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x34-0x37, len = 4
00:16:01.134  [2024-12-06 19:14:31.916516] vfu_virtio.c:1045:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 0 PCI_COMMON_Q_USEDHI with 0x2000
00:16:01.134  [2024-12-06 19:14:31.917494] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x1E-0x1F, len = 2
00:16:01.134  [2024-12-06 19:14:31.917519] vfu_virtio.c:1123:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_Q_NOFF with 0x0
00:16:01.134  [2024-12-06 19:14:31.918499] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x1C-0x1D, len = 2
00:16:01.134  [2024-12-06 19:14:31.918525] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_ENABLE with 0x1
00:16:01.134  [2024-12-06 19:14:31.918542] vfu_virtio.c: 267:virtio_dev_enable_vq: *DEBUG*: vfu.blk: enable vq 0
00:16:01.134  [2024-12-06 19:14:31.918558] vfu_virtio.c:  71:virtio_dev_map_vq: *DEBUG*: vfu.blk: try to map vq 0
00:16:01.134  [2024-12-06 19:14:31.918590] vfu_virtio.c: 107:virtio_dev_map_vq: *DEBUG*: vfu.blk: map vq 0 successfully
00:16:01.134  [2024-12-06 19:14:31.918638] virtio_vfio_user.c: 331:virtio_vfio_user_setup_queue: *DEBUG*: queue 0 addresses:
00:16:01.134  [2024-12-06 19:14:31.918671] virtio_vfio_user.c: 332:virtio_vfio_user_setup_queue: *DEBUG*: 	 desc_addr: 200069adc000
00:16:01.134  [2024-12-06 19:14:31.918686] virtio_vfio_user.c: 333:virtio_vfio_user_setup_queue: *DEBUG*: 	 aval_addr: 200069add000
00:16:01.134  [2024-12-06 19:14:31.918700] virtio_vfio_user.c: 334:virtio_vfio_user_setup_queue: *DEBUG*: 	 used_addr: 200069ade000
00:16:01.134  [2024-12-06 19:14:31.919510] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x16-0x17, len = 2
00:16:01.134  [2024-12-06 19:14:31.919530] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_SELECT with 0x1
00:16:01.134  [2024-12-06 19:14:31.920512] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x18-0x19, len = 2
00:16:01.134  [2024-12-06 19:14:31.920532] vfu_virtio.c:1135:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ queue 1 PCI_COMMON_Q_SIZE with 0x100
00:16:01.134  [2024-12-06 19:14:31.920569] virtio_vfio_user.c: 216:virtio_vfio_user_get_queue_size: *DEBUG*: queue 1, size 256
00:16:01.134  [2024-12-06 19:14:31.921520] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x16-0x17, len = 2
00:16:01.134  [2024-12-06 19:14:31.921539] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_SELECT with 0x1
00:16:01.134  [2024-12-06 19:14:31.922521] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x20-0x23, len = 4
00:16:01.134  [2024-12-06 19:14:31.922541] vfu_virtio.c:1020:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 1 PCI_COMMON_Q_DESCLO with 0x69ad8000
00:16:01.134  [2024-12-06 19:14:31.923532] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x24-0x27, len = 4
00:16:01.134  [2024-12-06 19:14:31.923552] vfu_virtio.c:1025:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 1 PCI_COMMON_Q_DESCHI with 0x2000
00:16:01.134  [2024-12-06 19:14:31.924543] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x28-0x2B, len = 4
00:16:01.134  [2024-12-06 19:14:31.924563] vfu_virtio.c:1030:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 1 PCI_COMMON_Q_AVAILLO with 0x69ad9000
00:16:01.134  [2024-12-06 19:14:31.925554] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x2C-0x2F, len = 4
00:16:01.134  [2024-12-06 19:14:31.925577] vfu_virtio.c:1035:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 1 PCI_COMMON_Q_AVAILHI with 0x2000
00:16:01.134  [2024-12-06 19:14:31.926566] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x30-0x33, len = 4
00:16:01.134  [2024-12-06 19:14:31.926585] vfu_virtio.c:1040:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 1 PCI_COMMON_Q_USEDLO with 0x69ada000
00:16:01.134  [2024-12-06 19:14:31.927574] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x34-0x37, len = 4
00:16:01.134  [2024-12-06 19:14:31.927593] vfu_virtio.c:1045:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 1 PCI_COMMON_Q_USEDHI with 0x2000
00:16:01.134  [2024-12-06 19:14:31.928582] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x1E-0x1F, len = 2
00:16:01.134  [2024-12-06 19:14:31.928600] vfu_virtio.c:1123:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_Q_NOFF with 0x1
00:16:01.134  [2024-12-06 19:14:31.929582] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x1C-0x1D, len = 2
00:16:01.134  [2024-12-06 19:14:31.929601] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_ENABLE with 0x1
00:16:01.134  [2024-12-06 19:14:31.929617] vfu_virtio.c: 267:virtio_dev_enable_vq: *DEBUG*: vfu.blk: enable vq 1
00:16:01.134  [2024-12-06 19:14:31.929628] vfu_virtio.c:  71:virtio_dev_map_vq: *DEBUG*: vfu.blk: try to map vq 1
00:16:01.134  [2024-12-06 19:14:31.929644] vfu_virtio.c: 107:virtio_dev_map_vq: *DEBUG*: vfu.blk: map vq 1 successfully
00:16:01.134  [2024-12-06 19:14:31.929690] virtio_vfio_user.c: 331:virtio_vfio_user_setup_queue: *DEBUG*: queue 1 addresses:
00:16:01.134  [2024-12-06 19:14:31.929718] virtio_vfio_user.c: 332:virtio_vfio_user_setup_queue: *DEBUG*: 	 desc_addr: 200069ad8000
00:16:01.134  [2024-12-06 19:14:31.929736] virtio_vfio_user.c: 333:virtio_vfio_user_setup_queue: *DEBUG*: 	 aval_addr: 200069ad9000
00:16:01.134  [2024-12-06 19:14:31.929747] virtio_vfio_user.c: 334:virtio_vfio_user_setup_queue: *DEBUG*: 	 used_addr: 200069ada000
00:16:01.134  [2024-12-06 19:14:31.930589] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:16:01.134  [2024-12-06 19:14:31.930612] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0xb
00:16:01.134  [2024-12-06 19:14:31.930657] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status b
00:16:01.134  [2024-12-06 19:14:31.930689] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status f
00:16:01.134  [2024-12-06 19:14:31.931600] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x14-0x14, len = 1
00:16:01.134  [2024-12-06 19:14:31.931624] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_STATUS with 0xf
00:16:01.134  [2024-12-06 19:14:31.931636] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status b, set status f
00:16:01.134  [2024-12-06 19:14:31.931651] vfu_virtio.c:1365:vfu_virtio_dev_start: *DEBUG*: start vfu.blk
00:16:01.134  [2024-12-06 19:14:31.933854] vfu_virtio.c:1377:vfu_virtio_dev_start: *DEBUG*: vfu.blk is started with ret 0
00:16:01.134  [2024-12-06 19:14:31.934946] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:16:01.134  [2024-12-06 19:14:31.934966] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0xf
00:16:01.134  [2024-12-06 19:14:31.935011] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status f
00:16:01.134  VirtioBlk0
00:16:01.134   19:14:31 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@50 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /tmp/bdevperf.sock perform_tests
00:16:01.134  Running I/O for 30 seconds...
00:16:03.459      87856.00 IOPS,   343.19 MiB/s
[2024-12-06T18:14:35.348Z]     87823.00 IOPS,   343.06 MiB/s
[2024-12-06T18:14:36.286Z]     87868.00 IOPS,   343.23 MiB/s
[2024-12-06T18:14:37.224Z]     87876.50 IOPS,   343.27 MiB/s
[2024-12-06T18:14:38.162Z]     87866.00 IOPS,   343.23 MiB/s
[2024-12-06T18:14:39.101Z]     87848.83 IOPS,   343.16 MiB/s
[2024-12-06T18:14:40.480Z]     87867.43 IOPS,   343.23 MiB/s
[2024-12-06T18:14:41.414Z]     87742.00 IOPS,   342.74 MiB/s
[2024-12-06T18:14:42.353Z]     87754.89 IOPS,   342.79 MiB/s
[2024-12-06T18:14:43.290Z]     87745.00 IOPS,   342.75 MiB/s
[2024-12-06T18:14:44.227Z]     87762.73 IOPS,   342.82 MiB/s
[2024-12-06T18:14:45.164Z]     87765.33 IOPS,   342.83 MiB/s
[2024-12-06T18:14:46.539Z]     87758.85 IOPS,   342.81 MiB/s
[2024-12-06T18:14:47.477Z]     87758.71 IOPS,   342.81 MiB/s
[2024-12-06T18:14:48.415Z]     87771.87 IOPS,   342.86 MiB/s
[2024-12-06T18:14:49.367Z]     87781.38 IOPS,   342.90 MiB/s
[2024-12-06T18:14:50.344Z]     87784.12 IOPS,   342.91 MiB/s
[2024-12-06T18:14:51.279Z]     87770.11 IOPS,   342.85 MiB/s
[2024-12-06T18:14:52.218Z]     87767.63 IOPS,   342.84 MiB/s
[2024-12-06T18:14:53.178Z]     87768.75 IOPS,   342.85 MiB/s
[2024-12-06T18:14:54.555Z]     87772.90 IOPS,   342.86 MiB/s
[2024-12-06T18:14:55.491Z]     87779.91 IOPS,   342.89 MiB/s
[2024-12-06T18:14:56.424Z]     87789.48 IOPS,   342.93 MiB/s
[2024-12-06T18:14:57.356Z]     87800.29 IOPS,   342.97 MiB/s
[2024-12-06T18:14:58.291Z]     87797.68 IOPS,   342.96 MiB/s
[2024-12-06T18:14:59.227Z]     87802.54 IOPS,   342.98 MiB/s
[2024-12-06T18:15:00.163Z]     87802.63 IOPS,   342.98 MiB/s
[2024-12-06T18:15:01.540Z]     87788.61 IOPS,   342.92 MiB/s
[2024-12-06T18:15:02.479Z]     87774.48 IOPS,   342.87 MiB/s
[2024-12-06T18:15:02.479Z]     87762.27 IOPS,   342.82 MiB/s
00:16:31.529                                                                                                  Latency(us)
00:16:31.529  
[2024-12-06T18:15:02.479Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:16:31.529  Job: VirtioScsi0t0 (Core Mask 0x10, workload: randrw, percentage: 50, depth: 256, IO size: 4096)
00:16:31.529  	 VirtioScsi0t0       :      30.01   20175.87      78.81       0.00     0.00   12680.49    1990.35   19612.25
00:16:31.529  Job: VirtioScsi0t1 (Core Mask 0x20, workload: randrw, percentage: 50, depth: 256, IO size: 4096)
00:16:31.529  	 VirtioScsi0t1       :      30.01   20176.00      78.81       0.00     0.00   12680.63    1953.94   19612.25
00:16:31.529  Job: VirtioBlk0 (Core Mask 0x40, workload: randrw, percentage: 50, depth: 256, IO size: 4096)
00:16:31.529  	 VirtioBlk0          :      30.01   47404.68     185.17       0.00     0.00    5395.04    1941.81    7670.14
00:16:31.529  
[2024-12-06T18:15:02.479Z]  ===================================================================================================================
00:16:31.529  
[2024-12-06T18:15:02.479Z]  Total                       :              87756.55     342.80       0.00     0.00    8745.37    1941.81   19612.25
00:16:31.529  {
00:16:31.529    "results": [
00:16:31.529      {
00:16:31.529        "job": "VirtioScsi0t0",
00:16:31.529        "core_mask": "0x10",
00:16:31.529        "workload": "randrw",
00:16:31.529        "percentage": 50,
00:16:31.529        "status": "finished",
00:16:31.529        "queue_depth": 256,
00:16:31.529        "io_size": 4096,
00:16:31.529        "runtime": 30.011103,
00:16:31.529        "iops": 20175.86624523597,
00:16:31.529        "mibps": 78.81197752045301,
00:16:31.529        "io_failed": 0,
00:16:31.529        "io_timeout": 0,
00:16:31.530        "avg_latency_us": 12680.493985576657,
00:16:31.530        "min_latency_us": 1990.3525925925926,
00:16:31.530        "max_latency_us": 19612.254814814816
00:16:31.530      },
00:16:31.530      {
00:16:31.530        "job": "VirtioScsi0t1",
00:16:31.530        "core_mask": "0x20",
00:16:31.530        "workload": "randrw",
00:16:31.530        "percentage": 50,
00:16:31.530        "status": "finished",
00:16:31.530        "queue_depth": 256,
00:16:31.530        "io_size": 4096,
00:16:31.530        "runtime": 30.011005,
00:16:31.530        "iops": 20175.99877111746,
00:16:31.530        "mibps": 78.81249519967758,
00:16:31.530        "io_failed": 0,
00:16:31.530        "io_timeout": 0,
00:16:31.530        "avg_latency_us": 12680.632822323003,
00:16:31.530        "min_latency_us": 1953.9437037037037,
00:16:31.530        "max_latency_us": 19612.254814814816
00:16:31.530      },
00:16:31.530      {
00:16:31.530        "job": "VirtioBlk0",
00:16:31.530        "core_mask": "0x40",
00:16:31.530        "workload": "randrw",
00:16:31.530        "percentage": 50,
00:16:31.530        "status": "finished",
00:16:31.530        "queue_depth": 256,
00:16:31.530        "io_size": 4096,
00:16:31.530        "runtime": 30.005664,
00:16:31.530        "iops": 47404.683329120795,
00:16:31.530        "mibps": 185.1745442543781,
00:16:31.530        "io_failed": 0,
00:16:31.530        "io_timeout": 0,
00:16:31.530        "avg_latency_us": 5395.039874195689,
00:16:31.530        "min_latency_us": 1941.8074074074075,
00:16:31.530        "max_latency_us": 7670.139259259259
00:16:31.530      }
00:16:31.530    ],
00:16:31.530    "core_count": 3
00:16:31.530  }
00:16:31.530   19:15:02 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@52 -- # killprocess 565219
00:16:31.530   19:15:02 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 565219 ']'
00:16:31.530   19:15:02 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@958 -- # kill -0 565219
00:16:31.530    19:15:02 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@959 -- # uname
00:16:31.530   19:15:02 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:31.530    19:15:02 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 565219
00:16:31.530   19:15:02 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_4
00:16:31.530   19:15:02 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']'
00:16:31.530   19:15:02 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 565219'
00:16:31.530  killing process with pid 565219
00:16:31.530   19:15:02 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@973 -- # kill 565219
00:16:31.530  Received shutdown signal, test time was about 30.000000 seconds
00:16:31.530  
00:16:31.530                                                                                                  Latency(us)
00:16:31.530  
[2024-12-06T18:15:02.480Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:16:31.530  
[2024-12-06T18:15:02.480Z]  ===================================================================================================================
00:16:31.530  
[2024-12-06T18:15:02.480Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:16:31.530   19:15:02 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@978 -- # wait 565219
00:16:31.530  [2024-12-06 19:15:02.224675] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status 0
00:16:31.530  [2024-12-06 19:15:02.225558] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x14-0x14, len = 1
00:16:31.530  [2024-12-06 19:15:02.225599] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_STATUS with 0x0
00:16:31.530  [2024-12-06 19:15:02.225624] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status f, set status 0
00:16:31.530  [2024-12-06 19:15:02.225638] vfu_virtio.c:1388:vfu_virtio_dev_stop: *DEBUG*: stop vfu.blk
00:16:31.530  [2024-12-06 19:15:02.225663] vfu_virtio.c: 116:virtio_dev_unmap_vq: *DEBUG*: vfu.blk: unmap vq 0
00:16:31.530  [2024-12-06 19:15:02.225681] vfu_virtio.c: 116:virtio_dev_unmap_vq: *DEBUG*: vfu.blk: unmap vq 1
00:16:31.530  [2024-12-06 19:15:02.225697] vfu_virtio.c: 190:vfu_virtio_dev_reset: *DEBUG*: device vfu.blk resetting
00:16:31.530  [2024-12-06 19:15:02.226533] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:16:31.530  [2024-12-06 19:15:02.226561] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0x0
00:16:31.530  [2024-12-06 19:15:02.226602] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 0
00:16:31.530  [2024-12-06 19:15:02.227532] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x16-0x17, len = 2
00:16:31.530  [2024-12-06 19:15:02.227557] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_SELECT with 0x0
00:16:31.530  [2024-12-06 19:15:02.228540] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x1C-0x1D, len = 2
00:16:31.530  [2024-12-06 19:15:02.228565] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_ENABLE with 0x0
00:16:31.530  [2024-12-06 19:15:02.228580] vfu_virtio.c: 301:virtio_dev_disable_vq: *DEBUG*: vfu.blk: disable vq 0
00:16:31.530  [2024-12-06 19:15:02.228598] vfu_virtio.c: 305:virtio_dev_disable_vq: *NOTICE*: Queue 0 isn't enabled
00:16:31.530  [2024-12-06 19:15:02.229555] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x16-0x17, len = 2
00:16:31.530  [2024-12-06 19:15:02.229579] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_SELECT with 0x1
00:16:31.530  [2024-12-06 19:15:02.230564] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x1C-0x1D, len = 2
00:16:31.530  [2024-12-06 19:15:02.230587] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_ENABLE with 0x0
00:16:31.530  [2024-12-06 19:15:02.230600] vfu_virtio.c: 301:virtio_dev_disable_vq: *DEBUG*: vfu.blk: disable vq 1
00:16:31.530  [2024-12-06 19:15:02.230616] vfu_virtio.c: 305:virtio_dev_disable_vq: *NOTICE*: Queue 1 isn't enabled
00:16:31.530  [2024-12-06 19:15:02.230683] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /tmp/vfu_devices/vfu.blk
00:16:31.530  [2024-12-06 19:15:02.233338] vfio_user_pci.c:  96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x80000000
00:16:31.530  [2024-12-06 19:15:02.264282] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status 0
00:16:31.530  [2024-12-06 19:15:02.264425] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x14-0x14, len = 1
00:16:31.530  [2024-12-06 19:15:02.264469] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_STATUS with 0x0
00:16:31.530  [2024-12-06 19:15:02.264486] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status f, set status 0
00:16:31.530  [2024-12-06 19:15:02.264501] vfu_virtio.c:1388:vfu_virtio_dev_stop: *DEBUG*: stop vfu.scsi
00:16:31.530  [2024-12-06 19:15:02.264526] vfu_virtio.c: 116:virtio_dev_unmap_vq: *DEBUG*: vfu.scsi: unmap vq 0
00:16:31.530  [2024-12-06 19:15:02.264546] vfu_virtio.c: 116:virtio_dev_unmap_vq: *DEBUG*: vfu.scsi: unmap vq 1
00:16:31.530  [2024-12-06 19:15:02.264558] vfu_virtio.c: 116:virtio_dev_unmap_vq: *DEBUG*: vfu.scsi: unmap vq 2
00:16:31.530  [2024-12-06 19:15:02.264571] vfu_virtio.c: 116:virtio_dev_unmap_vq: *DEBUG*: vfu.scsi: unmap vq 3
00:16:31.530  [2024-12-06 19:15:02.264582] vfu_virtio.c: 190:vfu_virtio_dev_reset: *DEBUG*: device vfu.scsi resetting
00:16:31.530  [2024-12-06 19:15:02.264852] vfu_virtio.c:1388:vfu_virtio_dev_stop: *DEBUG*: stop vfu.blk
00:16:31.530  [2024-12-06 19:15:02.264881] vfu_virtio.c:1391:vfu_virtio_dev_stop: *DEBUG*: vfu.blk isn't started
00:16:31.530  [2024-12-06 19:15:02.264895] vfu_virtio.c: 190:vfu_virtio_dev_reset: *DEBUG*: device vfu.blk resetting
00:16:31.530  [2024-12-06 19:15:02.264923] vfu_virtio.c:1416:vfu_virtio_detach_device: *DEBUG*: detach device vfu.blk
00:16:31.530  [2024-12-06 19:15:02.264953] vfu_virtio.c:1388:vfu_virtio_dev_stop: *DEBUG*: stop vfu.blk
00:16:31.530  [2024-12-06 19:15:02.264967] vfu_virtio.c:1391:vfu_virtio_dev_stop: *DEBUG*: vfu.blk isn't started
00:16:31.530  [2024-12-06 19:15:02.265432] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:16:31.530  [2024-12-06 19:15:02.265455] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0x0
00:16:31.530  [2024-12-06 19:15:02.265493] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 0
00:16:31.530  [2024-12-06 19:15:02.266433] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:16:31.530  [2024-12-06 19:15:02.266453] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x0
00:16:31.530  [2024-12-06 19:15:02.267440] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2
00:16:31.530  [2024-12-06 19:15:02.267459] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x0
00:16:31.530  [2024-12-06 19:15:02.267475] vfu_virtio.c: 301:virtio_dev_disable_vq: *DEBUG*: vfu.scsi: disable vq 0
00:16:31.530  [2024-12-06 19:15:02.267487] vfu_virtio.c: 305:virtio_dev_disable_vq: *NOTICE*: Queue 0 isn't enabled
00:16:31.530  [2024-12-06 19:15:02.268457] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:16:31.530  [2024-12-06 19:15:02.268477] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x1
00:16:31.530  [2024-12-06 19:15:02.269462] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2
00:16:31.530  [2024-12-06 19:15:02.269481] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x0
00:16:31.530  [2024-12-06 19:15:02.269496] vfu_virtio.c: 301:virtio_dev_disable_vq: *DEBUG*: vfu.scsi: disable vq 1
00:16:31.530  [2024-12-06 19:15:02.269507] vfu_virtio.c: 305:virtio_dev_disable_vq: *NOTICE*: Queue 1 isn't enabled
00:16:31.530  [2024-12-06 19:15:02.270466] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:16:31.530  [2024-12-06 19:15:02.270485] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x2
00:16:31.530  [2024-12-06 19:15:02.271477] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2
00:16:31.530  [2024-12-06 19:15:02.271496] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x0
00:16:31.530  [2024-12-06 19:15:02.271511] vfu_virtio.c: 301:virtio_dev_disable_vq: *DEBUG*: vfu.scsi: disable vq 2
00:16:31.530  [2024-12-06 19:15:02.271525] vfu_virtio.c: 305:virtio_dev_disable_vq: *NOTICE*: Queue 2 isn't enabled
00:16:31.530  [2024-12-06 19:15:02.272485] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:16:31.531  [2024-12-06 19:15:02.272504] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x3
00:16:31.531  [2024-12-06 19:15:02.273493] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2
00:16:31.531  [2024-12-06 19:15:02.273512] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x0
00:16:31.531  [2024-12-06 19:15:02.273530] vfu_virtio.c: 301:virtio_dev_disable_vq: *DEBUG*: vfu.scsi: disable vq 3
00:16:31.531  [2024-12-06 19:15:02.273541] vfu_virtio.c: 305:virtio_dev_disable_vq: *NOTICE*: Queue 3 isn't enabled
00:16:31.531  [2024-12-06 19:15:02.273612] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /tmp/vfu_devices/vfu.scsi
00:16:31.531  [2024-12-06 19:15:02.276192] vfio_user_pci.c:  96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x80000000
00:16:31.531  [2024-12-06 19:15:02.306721] vfu_virtio.c:1388:vfu_virtio_dev_stop: *DEBUG*: stop vfu.scsi
00:16:31.531  [2024-12-06 19:15:02.306745] vfu_virtio.c:1391:vfu_virtio_dev_stop: *DEBUG*: vfu.scsi isn't started
00:16:31.531  [2024-12-06 19:15:02.306761] vfu_virtio.c: 190:vfu_virtio_dev_reset: *DEBUG*: device vfu.scsi resetting
00:16:31.531  [2024-12-06 19:15:02.306786] vfu_virtio.c:1416:vfu_virtio_detach_device: *DEBUG*: detach device vfu.scsi
00:16:31.531  [2024-12-06 19:15:02.306804] vfu_virtio.c:1388:vfu_virtio_dev_stop: *DEBUG*: stop vfu.scsi
00:16:31.531  [2024-12-06 19:15:02.306816] vfu_virtio.c:1391:vfu_virtio_dev_stop: *DEBUG*: vfu.scsi isn't started
00:16:35.725   19:15:06 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@53 -- # trap - SIGINT SIGTERM EXIT
00:16:35.725   19:15:06 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py vfu_virtio_delete_endpoint vfu.blk
00:16:35.725  [2024-12-06 19:15:06.544809] tgt_endpoint.c: 701:spdk_vfu_delete_endpoint: *NOTICE*: Destruct endpoint vfu.blk
00:16:35.725   19:15:06 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@57 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py vfu_virtio_delete_endpoint vfu.scsi
00:16:35.984  [2024-12-06 19:15:06.845881] tgt_endpoint.c: 701:spdk_vfu_delete_endpoint: *NOTICE*: Destruct endpoint vfu.scsi
00:16:35.984   19:15:06 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@59 -- # killprocess 564807
00:16:35.984   19:15:06 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 564807 ']'
00:16:35.984   19:15:06 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@958 -- # kill -0 564807
00:16:35.984    19:15:06 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@959 -- # uname
00:16:35.984   19:15:06 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:35.984    19:15:06 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 564807
00:16:35.984   19:15:06 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:35.984   19:15:06 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:35.984   19:15:06 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 564807'
00:16:35.984  killing process with pid 564807
00:16:35.984   19:15:06 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@973 -- # kill 564807
00:16:35.984   19:15:06 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@978 -- # wait 564807
00:16:39.270  
00:16:39.270  real	0m44.086s
00:16:39.270  user	5m6.908s
00:16:39.270  sys	0m2.836s
00:16:39.270   19:15:09 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:39.270   19:15:09 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:16:39.270  ************************************
00:16:39.270  END TEST vfio_user_virtio_bdevperf
00:16:39.270  ************************************
00:16:39.270   19:15:09 vfio_user_qemu -- vfio_user/vfio_user.sh@20 -- # [[ y == y ]]
00:16:39.270   19:15:09 vfio_user_qemu -- vfio_user/vfio_user.sh@21 -- # run_test vfio_user_virtio_fs_fio /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_fs.sh
00:16:39.270   19:15:09 vfio_user_qemu -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:39.270   19:15:09 vfio_user_qemu -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:39.270   19:15:09 vfio_user_qemu -- common/autotest_common.sh@10 -- # set +x
00:16:39.270  ************************************
00:16:39.270  START TEST vfio_user_virtio_fs_fio
00:16:39.270  ************************************
00:16:39.270   19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_fs.sh
00:16:39.270  * Looking for test storage...
00:16:39.270  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:16:39.270    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:16:39.270     19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1711 -- # lcov --version
00:16:39.270     19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:16:39.270    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:16:39.270    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:39.270    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:39.270    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:39.270    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@336 -- # IFS=.-:
00:16:39.270    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@336 -- # read -ra ver1
00:16:39.270    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@337 -- # IFS=.-:
00:16:39.270    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@337 -- # read -ra ver2
00:16:39.270    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@338 -- # local 'op=<'
00:16:39.270    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@340 -- # ver1_l=2
00:16:39.270    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@341 -- # ver2_l=1
00:16:39.270    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:39.270    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@344 -- # case "$op" in
00:16:39.270    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@345 -- # : 1
00:16:39.270    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@364 -- # (( v = 0 ))
00:16:39.270    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:39.270     19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@365 -- # decimal 1
00:16:39.270     19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@353 -- # local d=1
00:16:39.270     19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:39.270     19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@355 -- # echo 1
00:16:39.270    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@365 -- # ver1[v]=1
00:16:39.270     19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@366 -- # decimal 2
00:16:39.270     19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@353 -- # local d=2
00:16:39.270     19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:39.270     19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@355 -- # echo 2
00:16:39.270    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@366 -- # ver2[v]=2
00:16:39.270    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:16:39.270    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:16:39.270    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@368 -- # return 0
00:16:39.270    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:39.270    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:16:39.270  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:39.270  		--rc genhtml_branch_coverage=1
00:16:39.270  		--rc genhtml_function_coverage=1
00:16:39.270  		--rc genhtml_legend=1
00:16:39.270  		--rc geninfo_all_blocks=1
00:16:39.270  		--rc geninfo_unexecuted_blocks=1
00:16:39.270  		
00:16:39.270  		'
00:16:39.270    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:16:39.270  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:39.270  		--rc genhtml_branch_coverage=1
00:16:39.270  		--rc genhtml_function_coverage=1
00:16:39.270  		--rc genhtml_legend=1
00:16:39.270  		--rc geninfo_all_blocks=1
00:16:39.270  		--rc geninfo_unexecuted_blocks=1
00:16:39.270  		
00:16:39.270  		'
00:16:39.270    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:16:39.270  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:39.270  		--rc genhtml_branch_coverage=1
00:16:39.270  		--rc genhtml_function_coverage=1
00:16:39.270  		--rc genhtml_legend=1
00:16:39.270  		--rc geninfo_all_blocks=1
00:16:39.270  		--rc geninfo_unexecuted_blocks=1
00:16:39.270  		
00:16:39.270  		'
00:16:39.270    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:16:39.270  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:39.270  		--rc genhtml_branch_coverage=1
00:16:39.270  		--rc genhtml_function_coverage=1
00:16:39.270  		--rc genhtml_legend=1
00:16:39.270  		--rc geninfo_all_blocks=1
00:16:39.270  		--rc geninfo_unexecuted_blocks=1
00:16:39.270  		
00:16:39.270  		'
00:16:39.270   19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/common.sh
00:16:39.270    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/common.sh@6 -- # : 128
00:16:39.270    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/common.sh@7 -- # : 512
00:16:39.270    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/common.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh
00:16:39.270     19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@6 -- # : false
00:16:39.270     19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@7 -- # : /root/vhost_test
00:16:39.270     19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@8 -- # : /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:16:39.270     19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@9 -- # : qemu-img
00:16:39.270      19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/..
00:16:39.270     19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest
00:16:39.270     19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:16:39.270     19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:16:39.270     19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:16:39.270     19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:16:39.270     19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:16:39.270       19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_fs.sh
00:16:39.270      19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:16:39.270     19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:16:39.270     19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:16:39.270     19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:16:39.270     19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:16:39.270     19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:16:39.270     19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config
00:16:39.270      19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:16:39.270      19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@2 -- # vhost_0_main_core=0
00:16:39.270      19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:16:39.270      19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:16:39.270      19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:16:39.270      19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:16:39.270      19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:16:39.270      19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:16:39.270      19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:16:39.270      19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:16:39.270      19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:16:39.270      19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:16:39.271      19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:16:39.271      19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:16:39.271      19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:16:39.271      19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:16:39.271      19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:16:39.271      19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:16:39.271      19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:16:39.271      19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:16:39.271      19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:16:39.271      19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:16:39.271      19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:16:39.271      19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:16:39.271      19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:16:39.271      19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:16:39.271     19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh
00:16:39.271      19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:16:39.271      19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:16:39.271      19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:16:39.271      19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler
00:16:39.271      19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:16:39.271      19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh
00:16:39.271       19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:16:39.271        19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/cgroups.sh@244 -- # check_cgroup
00:16:39.271        19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:16:39.271        19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:16:39.271        19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/cgroups.sh@10 -- # echo 2
00:16:39.271       19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/cgroups.sh@244 -- # cgroup_version=2
00:16:39.271    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/common.sh@11 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:16:39.271    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/common.sh@14 -- # [[ ! -e /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 ]]
00:16:39.271    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/common.sh@19 -- # QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:16:39.271   19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/common.sh
00:16:39.271   19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@12 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/autotest.config
00:16:39.271    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@1 -- # vhost_0_reactor_mask='[0-3]'
00:16:39.271    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@2 -- # vhost_0_main_core=0
00:16:39.271    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@4 -- # VM_0_qemu_mask=4-5
00:16:39.271    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:16:39.271    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@7 -- # VM_1_qemu_mask=6-7
00:16:39.271    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:16:39.271    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@10 -- # VM_2_qemu_mask=8-9
00:16:39.271    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:16:39.271    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@14 -- # get_vhost_dir 0
00:16:39.271    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@105 -- # local vhost_name=0
00:16:39.271    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:16:39.271    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:16:39.271   19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@14 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:16:39.271   19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@16 -- # vhosttestinit
00:16:39.271   19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@37 -- # '[' '' == iso ']'
00:16:39.271   19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@41 -- # [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz ]]
00:16:39.271   19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@41 -- # [[ ! -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:16:39.271   19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@46 -- # [[ ! -f /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:16:39.271   19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@18 -- # trap 'error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:16:39.271   19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@20 -- # vfu_tgt_run 0
00:16:39.271   19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@6 -- # local vhost_name=0
00:16:39.271   19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@7 -- # local vfio_user_dir vfu_pid_file rpc_py
00:16:39.271    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@9 -- # get_vhost_dir 0
00:16:39.271    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@105 -- # local vhost_name=0
00:16:39.271    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:16:39.271    19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:16:39.271   19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@9 -- # vfio_user_dir=/root/vhost_test/vhost/0
00:16:39.271   19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@10 -- # vfu_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:16:39.271   19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@11 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:16:39.271   19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@13 -- # mkdir -p /root/vhost_test/vhost/0
00:16:39.271   19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@15 -- # timing_enter vfu_tgt_start
00:16:39.271   19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@726 -- # xtrace_disable
00:16:39.271   19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x
00:16:39.271   19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@17 -- # vfupid=570670
00:16:39.271   19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@16 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -r /root/vhost_test/vhost/0/rpc.sock -m 0xf -s 512
00:16:39.271   19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@18 -- # echo 570670
00:16:39.271   19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@20 -- # echo 'Process pid: 570670'
00:16:39.271  Process pid: 570670
00:16:39.271   19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@21 -- # echo 'waiting for app to run...'
00:16:39.271  waiting for app to run...
00:16:39.271   19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@22 -- # waitforlisten 570670 /root/vhost_test/vhost/0/rpc.sock
00:16:39.271   19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@835 -- # '[' -z 570670 ']'
00:16:39.271   19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@839 -- # local rpc_addr=/root/vhost_test/vhost/0/rpc.sock
00:16:39.271   19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:39.271   19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...'
00:16:39.271  Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...
00:16:39.271   19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:39.271   19:15:09 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x
00:16:39.271  [2024-12-06 19:15:09.989163] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:16:39.271  [2024-12-06 19:15:09.989313] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xf -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid570670 ]
00:16:39.271  EAL: No free 2048 kB hugepages reported on node 1
00:16:39.529  [2024-12-06 19:15:10.367935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:16:39.786  [2024-12-06 19:15:10.490379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:16:39.786  [2024-12-06 19:15:10.490424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:16:39.786  [2024-12-06 19:15:10.490481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:39.786  [2024-12-06 19:15:10.490489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:16:40.367   19:15:11 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:40.367   19:15:11 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@868 -- # return 0
00:16:40.367   19:15:11 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@24 -- # timing_exit vfu_tgt_start
00:16:40.367   19:15:11 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@732 -- # xtrace_disable
00:16:40.367   19:15:11 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x
00:16:40.367   19:15:11 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@22 -- # vfu_vm_dir=/root/vhost_test/vms/vfu_tgt
00:16:40.367   19:15:11 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@23 -- # rm -rf /root/vhost_test/vms/vfu_tgt
00:16:40.367   19:15:11 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@24 -- # mkdir -p /root/vhost_test/vms/vfu_tgt
00:16:40.367   19:15:11 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@27 -- # disk_no=1
00:16:40.367   19:15:11 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@28 -- # vm_num=1
00:16:40.367   19:15:11 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@29 -- # job_file=default_fsdev.job
00:16:40.367   19:15:11 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@30 -- # be_virtiofs_dir=/tmp/vfio-test.1
00:16:40.367   19:15:11 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@31 -- # vm_virtiofs_dir=/tmp/virtiofs.1
00:16:40.367   19:15:11 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@33 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vfu_tgt_set_base_path /root/vhost_test/vms/vfu_tgt
00:16:40.931   19:15:11 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@35 -- # rm -rf /tmp/vfio-test.1
00:16:40.931   19:15:11 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@36 -- # mkdir -p /tmp/vfio-test.1
00:16:40.931    19:15:11 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@39 -- # mktemp --tmpdir=/tmp/vfio-test.1
00:16:40.931   19:15:11 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@39 -- # tmpfile=/tmp/vfio-test.1/tmp.JRvLIY8TjI
00:16:40.931   19:15:11 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@41 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock fsdev_aio_create aio.1 /tmp/vfio-test.1
00:16:41.188  aio.1
00:16:41.188   19:15:11 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@42 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vfu_virtio_create_fs_endpoint virtio.1 --fsdev-name aio.1 --tag vfu_test.1 --num-queues=2 --qsize=512 --packed-ring
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@45 -- # vm_setup --disk-type=vfio_user_virtio --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=1
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@518 -- # xtrace_disable
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x
00:16:41.446  WARN: removing existing VM in '/root/vhost_test/vms/1'
00:16:41.446  INFO: Creating new VM in /root/vhost_test/vms/1
00:16:41.446  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:16:41.446  INFO: TASK MASK: 6-7
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@671 -- # local node_num=0
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@672 -- # local boot_disk_present=false
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:16:41.446  INFO: NUMA NODE: 0
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@677 -- # [[ -n '' ]]
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@686 -- # [[ -z '' ]]
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@701 -- # IFS=,
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@701 -- # read -r disk disk_type _
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@702 -- # [[ -z '' ]]
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@702 -- # disk_type=vfio_user_virtio
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@704 -- # case $disk_type in
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@766 -- # notice 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:16:41.446  INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@767 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/vfu_tgt/virtio.$disk")
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@768 -- # [[ 1 == '' ]]
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@780 -- # [[ -n '' ]]
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@785 -- # (( 0 ))
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh'
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh'
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh'
00:16:41.446  INFO: Saving to /root/vhost_test/vms/1/run.sh
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@787 -- # cat
00:16:41.446    19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/vfu_tgt/virtio.1
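The long QEMU command above is assembled piece by piece by vhost/common.sh; the vfio-user-specific part is the `-device` argument built at line @767 of the trace. A minimal sketch of that assembly, using the `VM_DIR` path and disk index `1` taken from this log (variable names follow the trace):

```shell
# Sketch: rebuild the vfio-user -device argument from the logged pieces.
# VM_DIR and the disk index "1" come from this log; x-msg-timeout is the
# request timeout (ms) for messages on the vfio-user socket.
VM_DIR=/root/vhost_test/vms
disk=1
cmd=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/vfu_tgt/virtio.$disk")
printf '%s\n' "${cmd[1]}"
```

This reproduces the `socket=/root/vhost_test/vms/vfu_tgt/virtio.1` endpoint that the target later reports as "attached successfully".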
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/1/run.sh
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@827 -- # echo 10100
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@828 -- # echo 10101
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@829 -- # echo 10102
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/1/migration_port
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@832 -- # [[ -z '' ]]
00:16:41.446   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@834 -- # echo 10104
00:16:41.447   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@835 -- # echo 101
00:16:41.447   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@837 -- # [[ -z '' ]]
00:16:41.447   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@838 -- # [[ -z '' ]]
00:16:41.447   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@46 -- # vm_run 1
00:16:41.447   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:16:41.447   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@843 -- # local run_all=false
00:16:41.447   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@844 -- # local vms_to_run=
00:16:41.447   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@846 -- # getopts a-: optchar
00:16:41.447   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@856 -- # false
00:16:41.447   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@859 -- # shift 0
00:16:41.447   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@860 -- # for vm in "$@"
00:16:41.447   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@861 -- # vm_num_is_valid 1
00:16:41.447   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:41.447   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:16:41.447   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]]
00:16:41.447   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@866 -- # vms_to_run+=' 1'
00:16:41.447   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:16:41.447   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@871 -- # vm_is_running 1
00:16:41.447   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:16:41.447   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:41.447   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:16:41.447   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:16:41.447   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:16:41.447   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@373 -- # return 1
00:16:41.447   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/1/run.sh'
00:16:41.447   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh'
00:16:41.447   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:16:41.447   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:16:41.447   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:16:41.447   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:16:41.447   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:16:41.447   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh'
00:16:41.447  INFO: running /root/vhost_test/vms/1/run.sh
00:16:41.447   19:15:12 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@877 -- # /root/vhost_test/vms/1/run.sh
00:16:41.447  Running VM in /root/vhost_test/vms/1
00:16:41.706  [2024-12-06 19:15:12.639721] tgt_endpoint.c: 167:tgt_accept_poller: *NOTICE*: /root/vhost_test/vms/vfu_tgt/virtio.1: attached successfully
00:16:41.965  Waiting for QEMU pid file
00:16:42.899  === qemu.log ===
00:16:42.899  === qemu.log ===
00:16:42.899   19:15:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@47 -- # vm_wait_for_boot 60 1
00:16:42.899   19:15:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@913 -- # assert_number 60
00:16:42.899   19:15:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@281 -- # [[ 60 =~ [0-9]+ ]]
00:16:42.899   19:15:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@281 -- # return 0
00:16:42.899   19:15:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@915 -- # xtrace_disable
00:16:42.899   19:15:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x
00:16:42.899  INFO: Waiting for VMs to boot
00:16:42.899  INFO: waiting for VM1 (/root/vhost_test/vms/1)
00:17:04.832  
00:17:04.832  INFO: VM1 ready
00:17:04.832  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:17:04.832  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:17:04.832  INFO: all VMs ready
00:17:04.832   19:15:34 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@973 -- # return 0
00:17:04.832   19:15:34 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@49 -- # vm_exec 1 'mkdir /tmp/virtiofs.1'
00:17:04.832   19:15:34 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:17:04.832   19:15:34 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:04.832   19:15:34 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:17:04.832   19:15:34 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:17:04.832   19:15:34 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:17:04.832    19:15:34 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:17:04.832    19:15:34 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:17:04.832    19:15:34 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:04.832    19:15:34 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:17:04.832    19:15:34 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:17:04.832    19:15:34 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:17:04.832   19:15:34 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'mkdir /tmp/virtiofs.1'
00:17:04.832  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@50 -- # vm_exec 1 'mount -t virtiofs vfu_test.1 /tmp/virtiofs.1'
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:17:04.832    19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:17:04.832    19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:17:04.832    19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:04.832    19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:17:04.832    19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:17:04.832    19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'mount -t virtiofs vfu_test.1 /tmp/virtiofs.1'
00:17:04.832  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
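The mkdir/mount steps above run inside the guest via the harness's `vm_exec` wrapper. Distilled from the trace, the wrapper is sshpass plus ssh with host-key checking disabled, using the port read from the VM's `ssh_socket` file (10100 here); a sketch of the command string it produces for the mount step:

```shell
# Sketch of the vm_exec ssh invocation as seen in the trace above.
# vm_port (10100) and the virtiofs tag/mountpoint are taken from this log.
vm_port=10100
ssh_cmd=(sshpass -p root ssh -o UserKnownHostsFile=/dev/null
         -o StrictHostKeyChecking=no -o User=root -p "$vm_port" 127.0.0.1)
printf '%s\n' "${ssh_cmd[*]}"
```

The guest-side command then mounts the exported filesystem by its virtiofs tag: `mount -t virtiofs vfu_test.1 /tmp/virtiofs.1`.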
00:17:04.832    19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@52 -- # basename /tmp/vfio-test.1/tmp.JRvLIY8TjI
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@52 -- # vm_exec 1 'ls /tmp/virtiofs.1/tmp.JRvLIY8TjI'
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:17:04.832    19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:17:04.832    19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:17:04.832    19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:04.832    19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:17:04.832    19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:17:04.832    19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'ls /tmp/virtiofs.1/tmp.JRvLIY8TjI'
00:17:04.832  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:17:04.832  /tmp/virtiofs.1/tmp.JRvLIY8TjI
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@53 -- # vm_start_fio_server --fio-bin=/usr/src/fio-static/fio 1
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@977 -- # local OPTIND optchar
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@978 -- # local readonly=
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@979 -- # local fio_bin=
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@980 -- # getopts :-: optchar
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@981 -- # case "$optchar" in
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@983 -- # case "$OPTARG" in
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@984 -- # local fio_bin=/usr/src/fio-static/fio
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@980 -- # getopts :-: optchar
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@993 -- # shift 1
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@994 -- # for vm_num in "$@"
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@995 -- # notice 'Starting fio server on VM1'
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'Starting fio server on VM1'
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Starting fio server on VM1'
00:17:04.832  INFO: Starting fio server on VM1
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@996 -- # [[ /usr/src/fio-static/fio != '' ]]
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@997 -- # vm_exec 1 'cat > /root/fio; chmod +x /root/fio'
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:17:04.832    19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:17:04.832    19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:17:04.832    19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:04.832    19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:17:04.832    19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:17:04.832    19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/fio; chmod +x /root/fio'
00:17:04.832  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@998 -- # vm_exec 1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:17:04.832    19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:17:04.832    19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:17:04.832    19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:04.832    19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:17:04.832    19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:17:04.832    19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:17:04.832   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:17:04.832  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@54 -- # run_fio --fio-bin=/usr/src/fio-static/fio --job-file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_fsdev.job --out=/root/vhost_test/fio_results --vm=1:/tmp/virtiofs.1/test
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1053 -- # local arg
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1054 -- # local job_file=
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1055 -- # local fio_bin=
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1056 -- # vms=()
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1056 -- # local vms
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1057 -- # local out=
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1058 -- # local vm
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1059 -- # local run_server_mode=true
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1060 -- # local run_plugin_mode=false
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1061 -- # local fio_start_cmd
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1062 -- # local fio_output_format=normal
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1063 -- # local fio_gtod_reduce=false
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1064 -- # local wait_for_fio=true
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1069 -- # local fio_bin=/usr/src/fio-static/fio
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1068 -- # local job_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_fsdev.job
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1072 -- # local out=/root/vhost_test/fio_results
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1073 -- # mkdir -p /root/vhost_test/fio_results
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1070 -- # vms+=("${arg#*=}")
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1092 -- # [[ -n /usr/src/fio-static/fio ]]
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1092 -- # [[ ! -r /usr/src/fio-static/fio ]]
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1097 -- # [[ -z /usr/src/fio-static/fio ]]
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1101 -- # [[ ! -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_fsdev.job ]]
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1106 -- # fio_start_cmd='/usr/src/fio-static/fio --eta=never '
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1108 -- # local job_fname
00:17:05.092    19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1109 -- # basename /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_fsdev.job
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1109 -- # job_fname=default_fsdev.job
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1110 -- # log_fname=default_fsdev.log
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1111 -- # fio_start_cmd+=' --output=/root/vhost_test/fio_results/default_fsdev.log --output-format=normal '
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1114 -- # for vm in "${vms[@]}"
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1115 -- # local vm_num=1
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1116 -- # local vmdisks=/tmp/virtiofs.1/test
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1118 -- # sed 's@filename=@filename=/tmp/virtiofs.1/test@;s@description=\(.*\)@description=\1 (VM=1)@' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_fsdev.job
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1119 -- # vm_exec 1 'cat > /root/default_fsdev.job'
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:17:05.092   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:17:05.092    19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:17:05.092    19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:17:05.093    19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:05.093    19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:17:05.093    19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:17:05.093    19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:17:05.093   19:15:35 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/default_fsdev.job'
00:17:05.093  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:17:05.093   19:15:36 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1121 -- # false
00:17:05.093   19:15:36 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1125 -- # vm_exec 1 cat /root/default_fsdev.job
00:17:05.093   19:15:36 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:17:05.093   19:15:36 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:05.093   19:15:36 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:17:05.093   19:15:36 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:17:05.093   19:15:36 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:17:05.093    19:15:36 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:17:05.093    19:15:36 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:17:05.093    19:15:36 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:05.093    19:15:36 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:17:05.093    19:15:36 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:17:05.093    19:15:36 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:17:05.093   19:15:36 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 cat /root/default_fsdev.job
00:17:05.093  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:17:05.357  [global]
00:17:05.357  blocksize=4k
00:17:05.357  iodepth=512
00:17:05.357  ioengine=libaio
00:17:05.357  size=1G
00:17:05.357  group_reporting
00:17:05.357  thread
00:17:05.357  numjobs=1
00:17:05.357  direct=1
00:17:05.357  invalidate=1
00:17:05.357  rw=randrw
00:17:05.357  do_verify=1
00:17:05.357  filename=/tmp/virtiofs.1/test
00:17:05.357  [job0]
00:17:05.357   19:15:36 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1127 -- # true
00:17:05.357    19:15:36 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1128 -- # vm_fio_socket 1
00:17:05.357    19:15:36 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@326 -- # vm_num_is_valid 1
00:17:05.357    19:15:36 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:05.357    19:15:36 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:17:05.357    19:15:36 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@327 -- # local vm_dir=/root/vhost_test/vms/1
00:17:05.357    19:15:36 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@329 -- # cat /root/vhost_test/vms/1/fio_socket
00:17:05.357   19:15:36 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1128 -- # fio_start_cmd+='--client=127.0.0.1,10101 --remote-config /root/default_fsdev.job '
00:17:05.357   19:15:36 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1131 -- # true
00:17:05.357   19:15:36 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1147 -- # true
00:17:05.357   19:15:36 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1161 -- # /usr/src/fio-static/fio --eta=never --output=/root/vhost_test/fio_results/default_fsdev.log --output-format=normal --client=127.0.0.1,10101 --remote-config /root/default_fsdev.job
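fio runs here in client/server mode: the guest started `fio --server --daemonize` earlier, and the host now drives it with `--client` pointing at the forwarded fio socket. A sketch of how the trace's `fio_start_cmd` string is built up (all paths and the 127.0.0.1,10101 endpoint are taken from this log):

```shell
# Sketch: assemble fio_start_cmd the way the @1106/@1111/@1128 trace lines do.
fio_bin=/usr/src/fio-static/fio
out=/root/vhost_test/fio_results
job_fname=default_fsdev.job
fio_start_cmd="$fio_bin --eta=never "
fio_start_cmd+="--output=$out/${job_fname%.job}.log --output-format=normal "
fio_start_cmd+="--client=127.0.0.1,10101 --remote-config /root/$job_fname"
printf '%s\n' "$fio_start_cmd"
```

`--remote-config` makes the client ship the job file path to the server inside the VM, so the workload runs against the guest's `/tmp/virtiofs.1/test` file while results are collected on the host.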
00:17:31.910   19:15:59 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1162 -- # sleep 1
00:17:31.910   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1164 -- # [[ normal == \j\s\o\n ]]
00:17:31.910   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1172 -- # [[ ! -n '' ]]
00:17:31.910   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1173 -- # cat /root/vhost_test/fio_results/default_fsdev.log
00:17:31.910  hostname=vhostfedora-cloud-23052, be=0, 64-bit, os=Linux, arch=x86-64, fio=fio-3.35, flags=1
00:17:31.910  <vhostfedora-cloud-23052> job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=512
00:17:31.910  <vhostfedora-cloud-23052> Starting 1 thread
00:17:31.910  <vhostfedora-cloud-23052> job0: Laying out IO file (1 file / 1024MiB)
00:17:31.910  <vhostfedora-cloud-23052> 
00:17:31.910  job0: (groupid=0, jobs=1): err= 0: pid=968: Fri Dec  6 19:15:59 2024
00:17:31.910    read: IOPS=29.8k, BW=117MiB/s (122MB/s)(512MiB/4392msec)
00:17:31.910      slat (usec): min=2, max=155, avg= 3.21, stdev= 2.61
00:17:31.910      clat (usec): min=2172, max=16844, avg=8606.03, stdev=345.16
00:17:31.910       lat (usec): min=2175, max=16847, avg=8609.24, stdev=345.19
00:17:31.910      clat percentiles (usec):
00:17:31.910       |  1.00th=[ 8225],  5.00th=[ 8356], 10.00th=[ 8455], 20.00th=[ 8455],
00:17:31.910       | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 8586], 60.00th=[ 8586],
00:17:31.910       | 70.00th=[ 8717], 80.00th=[ 8717], 90.00th=[ 8848], 95.00th=[ 8848],
00:17:31.910       | 99.00th=[ 8979], 99.50th=[ 8979], 99.90th=[12649], 99.95th=[15270],
00:17:31.910       | 99.99th=[16712]
00:17:31.910     bw (  KiB/s): min=118224, max=122336, per=100.00%, avg=119596.00, stdev=1469.84, samples=8
00:17:31.910     iops        : min=29556, max=30584, avg=29899.00, stdev=367.46, samples=8
00:17:31.910    write: IOPS=29.8k, BW=117MiB/s (122MB/s)(512MiB/4392msec); 0 zone resets
00:17:31.910      slat (usec): min=2, max=939, avg= 3.68, stdev= 3.87
00:17:31.910      clat (usec): min=2080, max=16849, avg=8531.92, stdev=341.11
00:17:31.910       lat (usec): min=2083, max=16853, avg=8535.60, stdev=341.13
00:17:31.910      clat percentiles (usec):
00:17:31.910       |  1.00th=[ 8094],  5.00th=[ 8291], 10.00th=[ 8356], 20.00th=[ 8455],
00:17:31.910       | 30.00th=[ 8455], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 8586],
00:17:31.910       | 70.00th=[ 8586], 80.00th=[ 8717], 90.00th=[ 8717], 95.00th=[ 8848],
00:17:31.910       | 99.00th=[ 8848], 99.50th=[ 8979], 99.90th=[12518], 99.95th=[14746],
00:17:31.911       | 99.99th=[16712]
00:17:31.911     bw (  KiB/s): min=117776, max=121160, per=100.00%, avg=119412.00, stdev=1129.24, samples=8
00:17:31.911     iops        : min=29444, max=30290, avg=29853.00, stdev=282.31, samples=8
00:17:31.911    lat (msec)   : 4=0.10%, 10=99.74%, 20=0.16%
00:17:31.911    cpu          : usr=10.84%, sys=20.93%, ctx=9117, majf=0, minf=7
00:17:31.911    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
00:17:31.911       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:31.911       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:17:31.911       issued rwts: total=131040,131104,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:31.911       latency   : target=0, window=0, percentile=100.00%, depth=512
00:17:31.911  
00:17:31.911  Run status group 0 (all jobs):
00:17:31.911     READ: bw=117MiB/s (122MB/s), 117MiB/s-117MiB/s (122MB/s-122MB/s), io=512MiB (537MB), run=4392-4392msec
00:17:31.911    WRITE: bw=117MiB/s (122MB/s), 117MiB/s-117MiB/s (122MB/s-122MB/s), io=512MiB (537MB), run=4392-4392msec
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@55 -- # vm_exec 1 'umount /tmp/virtiofs.1'
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:17:31.911    19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:17:31.911    19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:17:31.911    19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:31.911    19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:17:31.911    19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:17:31.911    19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'umount /tmp/virtiofs.1'
00:17:31.911  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@58 -- # notice 'Shutting down virtual machine...'
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine...'
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine...'
00:17:31.911  INFO: Shutting down virtual machine...
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@59 -- # vm_shutdown_all
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@487 -- # local timeo=90 vms vm
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@489 -- # vms=($(vm_list_all))
00:17:31.911    19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@489 -- # vm_list_all
00:17:31.911    19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@466 -- # vms=()
00:17:31.911    19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@466 -- # local vms
00:17:31.911    19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:17:31.911    19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:17:31.911    19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@492 -- # vm_shutdown 1
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@417 -- # vm_num_is_valid 1
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/1
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/1 ]]
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@424 -- # vm_is_running 1
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@376 -- # local vm_pid
00:17:31.911    19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@377 -- # vm_pid=571002
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@379 -- # /bin/kill -0 571002
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@380 -- # return 0
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1'
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1'
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1'
00:17:31.911  INFO: Shutting down virtual machine /root/vhost_test/vms/1
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@432 -- # set +e
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@433 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\'''
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:17:31.911    19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:17:31.911    19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:17:31.911    19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:31.911    19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:17:31.911    19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:17:31.911    19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:17:31.911  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@434 -- # notice 'VM1 is shutting down - wait a while to complete'
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete'
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete'
00:17:31.911  INFO: VM1 is shutting down - wait a while to complete
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@435 -- # set -e
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@495 -- # notice 'Waiting for VMs to shutdown...'
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...'
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...'
00:17:31.911  INFO: Waiting for VMs to shutdown...
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@498 -- # vm_is_running 1
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:17:31.911   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@376 -- # local vm_pid
00:17:31.912    19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:17:31.912   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@377 -- # vm_pid=571002
00:17:31.912   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@379 -- # /bin/kill -0 571002
00:17:31.912   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@380 -- # return 0
00:17:31.912   19:16:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@500 -- # sleep 1
00:17:31.912   19:16:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:17:31.912   19:16:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:17:31.912   19:16:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@498 -- # vm_is_running 1
00:17:31.912   19:16:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:17:31.912   19:16:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:31.912   19:16:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:17:31.912   19:16:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:17:31.912   19:16:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:17:31.912   19:16:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@373 -- # return 1
00:17:31.912   19:16:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:17:31.912   19:16:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@500 -- # sleep 1
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@496 -- # (( timeo-- > 0 && 0 > 0 ))
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@503 -- # (( 0 == 0 ))
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@504 -- # notice 'All VMs successfully shut down'
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down'
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down'
00:17:31.912  INFO: All VMs successfully shut down
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@505 -- # return 0
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@61 -- # vhost_kill 0
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@202 -- # local rc=0
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@203 -- # local vhost_name=0
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@205 -- # [[ -z 0 ]]
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@210 -- # local vhost_dir
00:17:31.912    19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@211 -- # get_vhost_dir 0
00:17:31.912    19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@105 -- # local vhost_name=0
00:17:31.912    19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:17:31.912    19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@211 -- # vhost_dir=/root/vhost_test/vhost/0
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@212 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@214 -- # [[ ! -r /root/vhost_test/vhost/0/vhost.pid ]]
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@219 -- # timing_enter vhost_kill
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@726 -- # xtrace_disable
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@220 -- # local vhost_pid
00:17:31.912    19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@221 -- # cat /root/vhost_test/vhost/0/vhost.pid
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@221 -- # vhost_pid=570670
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@222 -- # notice 'killing vhost (PID 570670) app'
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'killing vhost (PID 570670) app'
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: killing vhost (PID 570670) app'
00:17:31.912  INFO: killing vhost (PID 570670) app
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@224 -- # kill -INT 570670
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@225 -- # notice 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: sent SIGINT to vhost app - waiting 60 seconds to exit'
00:17:31.912  INFO: sent SIGINT to vhost app - waiting 60 seconds to exit
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i = 0 ))
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i < 60 ))
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@227 -- # kill -0 570670
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@228 -- # echo .
00:17:31.912  .
00:17:31.912   19:16:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@229 -- # sleep 1
00:17:33.292  [2024-12-06 19:16:03.800412] vfu_virtio_fs.c: 301:_vfu_virtio_fs_fuse_dispatcher_delete_cpl: *NOTICE*: FUSE dispatcher deleted
00:17:33.292   19:16:03 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i++ ))
00:17:33.292   19:16:03 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i < 60 ))
00:17:33.292   19:16:03 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@227 -- # kill -0 570670
00:17:33.292   19:16:03 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@228 -- # echo .
00:17:33.292  .
00:17:33.292   19:16:03 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@229 -- # sleep 1
00:17:34.230   19:16:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i++ ))
00:17:34.230   19:16:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i < 60 ))
00:17:34.230   19:16:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@227 -- # kill -0 570670
00:17:34.230   19:16:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@228 -- # echo .
00:17:34.230  .
00:17:34.230   19:16:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@229 -- # sleep 1
00:17:35.165   19:16:05 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i++ ))
00:17:35.165   19:16:05 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i < 60 ))
00:17:35.165   19:16:05 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@227 -- # kill -0 570670
00:17:35.165  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 227: kill: (570670) - No such process
00:17:35.165   19:16:05 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@231 -- # break
00:17:35.165   19:16:05 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@234 -- # kill -0 570670
00:17:35.165  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 234: kill: (570670) - No such process
00:17:35.165   19:16:05 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@239 -- # kill -0 570670
00:17:35.165  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 239: kill: (570670) - No such process
00:17:35.165   19:16:05 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@245 -- # is_pid_child 570670
00:17:35.165   19:16:05 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1686 -- # local pid=570670 _pid
00:17:35.165   19:16:05 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1688 -- # read -r _pid
00:17:35.165    19:16:05 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1685 -- # jobs -pr
00:17:35.165   19:16:05 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1689 -- # (( pid == _pid ))
00:17:35.165   19:16:05 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1688 -- # read -r _pid
00:17:35.165   19:16:05 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1692 -- # return 1
00:17:35.165   19:16:05 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@257 -- # timing_exit vhost_kill
00:17:35.165   19:16:05 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:35.165   19:16:05 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x
00:17:35.165   19:16:05 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@259 -- # rm -rf /root/vhost_test/vhost/0
00:17:35.165   19:16:05 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@261 -- # return 0
00:17:35.165   19:16:05 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@63 -- # vhosttestfini
00:17:35.165   19:16:05 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@54 -- # '[' '' == iso ']'
00:17:35.165  
00:17:35.165  real	0m56.171s
00:17:35.165  user	3m34.279s
00:17:35.165  sys	0m3.672s
00:17:35.165   19:16:05 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:35.165   19:16:05 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x
00:17:35.165  ************************************
00:17:35.166  END TEST vfio_user_virtio_fs_fio
00:17:35.166  ************************************
00:17:35.166   19:16:05 vfio_user_qemu -- vfio_user/vfio_user.sh@26 -- # vhosttestfini
00:17:35.166   19:16:05 vfio_user_qemu -- vhost/common.sh@54 -- # '[' iso == iso ']'
00:17:35.166   19:16:05 vfio_user_qemu -- vhost/common.sh@55 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh reset
00:17:36.104  Waiting for block devices as requested
00:17:36.362  0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:17:36.362  0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:17:36.362  0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:17:36.623  0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:17:36.623  0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:17:36.623  0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:17:36.623  0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:17:36.882  0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:17:36.882  0000:0b:00.0 (8086 0a54): vfio-pci -> nvme
00:17:37.142  0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:17:37.142  0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:17:37.142  0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:17:37.142  0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:17:37.417  0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma
00:17:37.417  0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:17:37.417  0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma
00:17:37.417  0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma
00:17:37.675  
00:17:37.675  real	6m36.333s
00:17:37.675  user	26m57.788s
00:17:37.675  sys	0m20.396s
00:17:37.675   19:16:08 vfio_user_qemu -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:37.675   19:16:08 vfio_user_qemu -- common/autotest_common.sh@10 -- # set +x
00:17:37.675  ************************************
00:17:37.675  END TEST vfio_user_qemu
00:17:37.675  ************************************
00:17:37.675   19:16:08  -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:17:37.675   19:16:08  -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:17:37.675   19:16:08  -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:17:37.675   19:16:08  -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:17:37.675   19:16:08  -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:17:37.675   19:16:08  -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:17:37.675   19:16:08  -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:17:37.675   19:16:08  -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:17:37.675   19:16:08  -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:17:37.675   19:16:08  -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:17:37.675   19:16:08  -- spdk/autotest.sh@370 -- # [[ 1 -eq 1 ]]
00:17:37.675   19:16:08  -- spdk/autotest.sh@371 -- # run_test sma /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/sma.sh
00:17:37.675   19:16:08  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:37.675   19:16:08  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:37.675   19:16:08  -- common/autotest_common.sh@10 -- # set +x
00:17:37.675  ************************************
00:17:37.675  START TEST sma
00:17:37.675  ************************************
00:17:37.675   19:16:08 sma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/sma.sh
00:17:37.675  * Looking for test storage...
00:17:37.675  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:17:37.675    19:16:08 sma -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:17:37.675     19:16:08 sma -- common/autotest_common.sh@1711 -- # lcov --version
00:17:37.675     19:16:08 sma -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:17:37.934    19:16:08 sma -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:17:37.934    19:16:08 sma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:37.934    19:16:08 sma -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:37.934    19:16:08 sma -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:37.934    19:16:08 sma -- scripts/common.sh@336 -- # IFS=.-:
00:17:37.934    19:16:08 sma -- scripts/common.sh@336 -- # read -ra ver1
00:17:37.934    19:16:08 sma -- scripts/common.sh@337 -- # IFS=.-:
00:17:37.934    19:16:08 sma -- scripts/common.sh@337 -- # read -ra ver2
00:17:37.934    19:16:08 sma -- scripts/common.sh@338 -- # local 'op=<'
00:17:37.934    19:16:08 sma -- scripts/common.sh@340 -- # ver1_l=2
00:17:37.934    19:16:08 sma -- scripts/common.sh@341 -- # ver2_l=1
00:17:37.934    19:16:08 sma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:37.934    19:16:08 sma -- scripts/common.sh@344 -- # case "$op" in
00:17:37.934    19:16:08 sma -- scripts/common.sh@345 -- # : 1
00:17:37.934    19:16:08 sma -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:37.934    19:16:08 sma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:37.934     19:16:08 sma -- scripts/common.sh@365 -- # decimal 1
00:17:37.934     19:16:08 sma -- scripts/common.sh@353 -- # local d=1
00:17:37.934     19:16:08 sma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:37.934     19:16:08 sma -- scripts/common.sh@355 -- # echo 1
00:17:37.934    19:16:08 sma -- scripts/common.sh@365 -- # ver1[v]=1
00:17:37.934     19:16:08 sma -- scripts/common.sh@366 -- # decimal 2
00:17:37.934     19:16:08 sma -- scripts/common.sh@353 -- # local d=2
00:17:37.934     19:16:08 sma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:37.934     19:16:08 sma -- scripts/common.sh@355 -- # echo 2
00:17:37.934    19:16:08 sma -- scripts/common.sh@366 -- # ver2[v]=2
00:17:37.934    19:16:08 sma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:37.934    19:16:08 sma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:17:37.934    19:16:08 sma -- scripts/common.sh@368 -- # return 0
00:17:37.934    19:16:08 sma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:37.934    19:16:08 sma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:17:37.934  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:37.934  		--rc genhtml_branch_coverage=1
00:17:37.934  		--rc genhtml_function_coverage=1
00:17:37.934  		--rc genhtml_legend=1
00:17:37.934  		--rc geninfo_all_blocks=1
00:17:37.934  		--rc geninfo_unexecuted_blocks=1
00:17:37.934  		
00:17:37.934  		'
00:17:37.934    19:16:08 sma -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:17:37.934  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:37.934  		--rc genhtml_branch_coverage=1
00:17:37.934  		--rc genhtml_function_coverage=1
00:17:37.934  		--rc genhtml_legend=1
00:17:37.934  		--rc geninfo_all_blocks=1
00:17:37.934  		--rc geninfo_unexecuted_blocks=1
00:17:37.934  		
00:17:37.934  		'
00:17:37.934    19:16:08 sma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:17:37.934  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:37.934  		--rc genhtml_branch_coverage=1
00:17:37.934  		--rc genhtml_function_coverage=1
00:17:37.934  		--rc genhtml_legend=1
00:17:37.934  		--rc geninfo_all_blocks=1
00:17:37.934  		--rc geninfo_unexecuted_blocks=1
00:17:37.934  		
00:17:37.934  		'
00:17:37.934    19:16:08 sma -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:17:37.934  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:37.934  		--rc genhtml_branch_coverage=1
00:17:37.934  		--rc genhtml_function_coverage=1
00:17:37.934  		--rc genhtml_legend=1
00:17:37.934  		--rc geninfo_all_blocks=1
00:17:37.934  		--rc geninfo_unexecuted_blocks=1
00:17:37.934  		
00:17:37.934  		'
00:17:37.934   19:16:08 sma -- sma/sma.sh@11 -- # run_test sma_nvmf_tcp /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/nvmf_tcp.sh
00:17:37.934   19:16:08 sma -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:37.934   19:16:08 sma -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:37.934   19:16:08 sma -- common/autotest_common.sh@10 -- # set +x
00:17:37.934  ************************************
00:17:37.934  START TEST sma_nvmf_tcp
00:17:37.934  ************************************
00:17:37.934   19:16:08 sma.sma_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/nvmf_tcp.sh
00:17:37.934  * Looking for test storage...
00:17:37.934  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:17:37.934    19:16:08 sma.sma_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:17:37.934     19:16:08 sma.sma_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version
00:17:37.934     19:16:08 sma.sma_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:17:37.934    19:16:08 sma.sma_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:17:37.934    19:16:08 sma.sma_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:37.934    19:16:08 sma.sma_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:37.934    19:16:08 sma.sma_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:37.934    19:16:08 sma.sma_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:17:37.934    19:16:08 sma.sma_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:17:37.934    19:16:08 sma.sma_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:17:37.934    19:16:08 sma.sma_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:17:37.934    19:16:08 sma.sma_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:17:37.934    19:16:08 sma.sma_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:17:37.934    19:16:08 sma.sma_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:17:37.934    19:16:08 sma.sma_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:37.934    19:16:08 sma.sma_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in
00:17:37.934    19:16:08 sma.sma_nvmf_tcp -- scripts/common.sh@345 -- # : 1
00:17:37.934    19:16:08 sma.sma_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:37.934    19:16:08 sma.sma_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:37.934     19:16:08 sma.sma_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1
00:17:37.934     19:16:08 sma.sma_nvmf_tcp -- scripts/common.sh@353 -- # local d=1
00:17:37.934     19:16:08 sma.sma_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:37.934     19:16:08 sma.sma_nvmf_tcp -- scripts/common.sh@355 -- # echo 1
00:17:37.934    19:16:08 sma.sma_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:17:37.934     19:16:08 sma.sma_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2
00:17:37.934     19:16:08 sma.sma_nvmf_tcp -- scripts/common.sh@353 -- # local d=2
00:17:37.934     19:16:08 sma.sma_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:37.935     19:16:08 sma.sma_nvmf_tcp -- scripts/common.sh@355 -- # echo 2
00:17:37.935    19:16:08 sma.sma_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:17:37.935    19:16:08 sma.sma_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:37.935    19:16:08 sma.sma_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:17:37.935    19:16:08 sma.sma_nvmf_tcp -- scripts/common.sh@368 -- # return 0
00:17:37.935    19:16:08 sma.sma_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:37.935    19:16:08 sma.sma_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:17:37.935  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:37.935  		--rc genhtml_branch_coverage=1
00:17:37.935  		--rc genhtml_function_coverage=1
00:17:37.935  		--rc genhtml_legend=1
00:17:37.935  		--rc geninfo_all_blocks=1
00:17:37.935  		--rc geninfo_unexecuted_blocks=1
00:17:37.935  		
00:17:37.935  		'
00:17:37.935    19:16:08 sma.sma_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:17:37.935  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:37.935  		--rc genhtml_branch_coverage=1
00:17:37.935  		--rc genhtml_function_coverage=1
00:17:37.935  		--rc genhtml_legend=1
00:17:37.935  		--rc geninfo_all_blocks=1
00:17:37.935  		--rc geninfo_unexecuted_blocks=1
00:17:37.935  		
00:17:37.935  		'
00:17:37.935    19:16:08 sma.sma_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:17:37.935  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:37.935  		--rc genhtml_branch_coverage=1
00:17:37.935  		--rc genhtml_function_coverage=1
00:17:37.935  		--rc genhtml_legend=1
00:17:37.935  		--rc geninfo_all_blocks=1
00:17:37.935  		--rc geninfo_unexecuted_blocks=1
00:17:37.935  		
00:17:37.935  		'
00:17:37.935    19:16:08 sma.sma_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:17:37.935  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:37.935  		--rc genhtml_branch_coverage=1
00:17:37.935  		--rc genhtml_function_coverage=1
00:17:37.935  		--rc genhtml_legend=1
00:17:37.935  		--rc geninfo_all_blocks=1
00:17:37.935  		--rc geninfo_unexecuted_blocks=1
00:17:37.935  		
00:17:37.935  		'
00:17:37.935   19:16:08 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:17:37.935   19:16:08 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@70 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:17:37.935   19:16:08 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@73 -- # tgtpid=578476
00:17:37.935   19:16:08 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@72 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:17:37.935   19:16:08 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@83 -- # smapid=578477
00:17:37.935   19:16:08 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@86 -- # sma_waitforlisten
00:17:37.935   19:16:08 sma.sma_nvmf_tcp -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:17:37.935   19:16:08 sma.sma_nvmf_tcp -- sma/common.sh@8 -- # local sma_port=8080
00:17:37.935    19:16:08 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@75 -- # cat
00:17:37.935   19:16:08 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@75 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:17:37.935   19:16:08 sma.sma_nvmf_tcp -- sma/common.sh@10 -- # (( i = 0 ))
00:17:37.935   19:16:08 sma.sma_nvmf_tcp -- sma/common.sh@10 -- # (( i < 5 ))
00:17:37.935   19:16:08 sma.sma_nvmf_tcp -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:17:37.935   19:16:08 sma.sma_nvmf_tcp -- sma/common.sh@14 -- # sleep 1s
00:17:38.193  [2024-12-06 19:16:08.941788] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:17:38.193  [2024-12-06 19:16:08.941924] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid578476 ]
00:17:38.193  EAL: No free 2048 kB hugepages reported on node 1
00:17:38.193  [2024-12-06 19:16:09.071148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:38.452  [2024-12-06 19:16:09.185779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:39.018   19:16:09 sma.sma_nvmf_tcp -- sma/common.sh@10 -- # (( i++ ))
00:17:39.018   19:16:09 sma.sma_nvmf_tcp -- sma/common.sh@10 -- # (( i < 5 ))
00:17:39.018   19:16:09 sma.sma_nvmf_tcp -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:17:39.018   19:16:09 sma.sma_nvmf_tcp -- sma/common.sh@14 -- # sleep 1s
00:17:39.277  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:39.277  I0000 00:00:1733508970.198123  578477 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:39.534  [2024-12-06 19:16:10.271686] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:40.099   19:16:10 sma.sma_nvmf_tcp -- sma/common.sh@10 -- # (( i++ ))
00:17:40.099   19:16:10 sma.sma_nvmf_tcp -- sma/common.sh@10 -- # (( i < 5 ))
00:17:40.099   19:16:10 sma.sma_nvmf_tcp -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:17:40.099   19:16:10 sma.sma_nvmf_tcp -- sma/common.sh@12 -- # return 0
00:17:40.099   19:16:10 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@89 -- # rpc_cmd bdev_null_create null0 100 4096
00:17:40.099   19:16:10 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:40.099   19:16:10 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:17:40.099  null0
00:17:40.099   19:16:10 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:40.099   19:16:10 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@92 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:17:40.099   19:16:10 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:40.099   19:16:10 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:17:40.099  [
00:17:40.099    {
00:17:40.099      "trtype": "TCP",
00:17:40.099      "max_queue_depth": 128,
00:17:40.099      "max_io_qpairs_per_ctrlr": 127,
00:17:40.099      "in_capsule_data_size": 4096,
00:17:40.099      "max_io_size": 131072,
00:17:40.099      "io_unit_size": 131072,
00:17:40.099      "max_aq_depth": 128,
00:17:40.099      "num_shared_buffers": 511,
00:17:40.099      "buf_cache_size": 4294967295,
00:17:40.099      "dif_insert_or_strip": false,
00:17:40.099      "zcopy": false,
00:17:40.099      "c2h_success": true,
00:17:40.099      "sock_priority": 0,
00:17:40.099      "abort_timeout_sec": 1,
00:17:40.099      "ack_timeout": 0,
00:17:40.099      "data_wr_pool_size": 0
00:17:40.099    }
00:17:40.099  ]
00:17:40.099   19:16:10 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:40.099    19:16:10 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@95 -- # create_device nqn.2016-06.io.spdk:cnode0
00:17:40.099    19:16:10 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:40.099    19:16:10 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@95 -- # jq -r .handle
00:17:40.358  I0000 00:00:1733508971.216959  578782 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:40.358  I0000 00:00:1733508971.218963  578782 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:40.358  I0000 00:00:1733508971.234434  578787 subchannel.cc:806] subchannel 0x55d5c4bf6560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55d5c4c0cf20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55d5c4bc36e0, grpc.internal.client_channel_call_destination=0x7fe8ff9ee390, grpc.internal.event_engine=0x55d5c4bf25b0, grpc.internal.security_connector=0x55d5c4b76fb0, grpc.internal.subchannel_pool=0x55d5c4c46410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55d5c4b10a60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:16:11.233776483+01:00"}), backing off for 1000 ms
00:17:40.358  [2024-12-06 19:16:11.255283] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:17:40.358   19:16:11 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@95 -- # devid0=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:40.358   19:16:11 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@96 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:17:40.358   19:16:11 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:40.358   19:16:11 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:17:40.358  [
00:17:40.358    {
00:17:40.358      "nqn": "nqn.2016-06.io.spdk:cnode0",
00:17:40.358      "subtype": "NVMe",
00:17:40.358      "listen_addresses": [
00:17:40.358        {
00:17:40.358          "trtype": "TCP",
00:17:40.358          "adrfam": "IPv4",
00:17:40.358          "traddr": "127.0.0.1",
00:17:40.358          "trsvcid": "4420"
00:17:40.358        }
00:17:40.358      ],
00:17:40.358      "allow_any_host": false,
00:17:40.358      "hosts": [],
00:17:40.358      "serial_number": "00000000000000000000",
00:17:40.358      "model_number": "SPDK bdev Controller",
00:17:40.358      "max_namespaces": 32,
00:17:40.358      "min_cntlid": 1,
00:17:40.358      "max_cntlid": 65519,
00:17:40.358      "namespaces": []
00:17:40.358    }
00:17:40.358  ]
00:17:40.358   19:16:11 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:40.358    19:16:11 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@98 -- # create_device nqn.2016-06.io.spdk:cnode1
00:17:40.358    19:16:11 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@98 -- # jq -r .handle
00:17:40.358    19:16:11 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:40.616  I0000 00:00:1733508971.520228  578811 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:40.616  I0000 00:00:1733508971.522047  578811 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:40.616  I0000 00:00:1733508971.523637  578813 subchannel.cc:806] subchannel 0x55f0efcc0560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55f0efcd6f20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55f0efc8d6e0, grpc.internal.client_channel_call_destination=0x7f3bb8c6d390, grpc.internal.event_engine=0x55f0efcbc5b0, grpc.internal.security_connector=0x55f0efc40fb0, grpc.internal.subchannel_pool=0x55f0efd10410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55f0efbdaa60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:16:11.523152879+01:00"}), backing off for 999 ms
00:17:40.875   19:16:11 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@98 -- # devid1=nvmf-tcp:nqn.2016-06.io.spdk:cnode1
00:17:40.875   19:16:11 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@99 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:17:40.875   19:16:11 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:40.875   19:16:11 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:17:40.875  [
00:17:40.875    {
00:17:40.875      "nqn": "nqn.2016-06.io.spdk:cnode0",
00:17:40.875      "subtype": "NVMe",
00:17:40.875      "listen_addresses": [
00:17:40.875        {
00:17:40.875          "trtype": "TCP",
00:17:40.875          "adrfam": "IPv4",
00:17:40.875          "traddr": "127.0.0.1",
00:17:40.875          "trsvcid": "4420"
00:17:40.875        }
00:17:40.875      ],
00:17:40.875      "allow_any_host": false,
00:17:40.875      "hosts": [],
00:17:40.875      "serial_number": "00000000000000000000",
00:17:40.875      "model_number": "SPDK bdev Controller",
00:17:40.875      "max_namespaces": 32,
00:17:40.875      "min_cntlid": 1,
00:17:40.875      "max_cntlid": 65519,
00:17:40.875      "namespaces": []
00:17:40.875    }
00:17:40.875  ]
00:17:40.875   19:16:11 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:40.875   19:16:11 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@100 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1
00:17:40.875   19:16:11 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:40.875   19:16:11 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:17:40.875  [
00:17:40.875    {
00:17:40.875      "nqn": "nqn.2016-06.io.spdk:cnode1",
00:17:40.875      "subtype": "NVMe",
00:17:40.875      "listen_addresses": [
00:17:40.875        {
00:17:40.875          "trtype": "TCP",
00:17:40.875          "adrfam": "IPv4",
00:17:40.875          "traddr": "127.0.0.1",
00:17:40.875          "trsvcid": "4420"
00:17:40.875        }
00:17:40.875      ],
00:17:40.875      "allow_any_host": false,
00:17:40.875      "hosts": [],
00:17:40.875      "serial_number": "00000000000000000000",
00:17:40.875      "model_number": "SPDK bdev Controller",
00:17:40.875      "max_namespaces": 32,
00:17:40.875      "min_cntlid": 1,
00:17:40.875      "max_cntlid": 65519,
00:17:40.875      "namespaces": []
00:17:40.875    }
00:17:40.875  ]
00:17:40.875   19:16:11 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:40.875   19:16:11 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@101 -- # [[ nvmf-tcp:nqn.2016-06.io.spdk:cnode0 != \n\v\m\f\-\t\c\p\:\n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]]
00:17:40.875    19:16:11 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@104 -- # rpc_cmd nvmf_get_subsystems
00:17:40.875    19:16:11 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:40.875    19:16:11 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:17:40.875    19:16:11 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@104 -- # jq -r '. | length'
00:17:40.875    19:16:11 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:40.875   19:16:11 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@104 -- # [[ 3 -eq 3 ]]
00:17:40.875    19:16:11 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@108 -- # create_device nqn.2016-06.io.spdk:cnode0
00:17:40.875    19:16:11 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:40.875    19:16:11 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@108 -- # jq -r .handle
00:17:41.134  I0000 00:00:1733508971.868835  578839 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:41.134  I0000 00:00:1733508971.870681  578839 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:41.134  I0000 00:00:1733508971.872299  578963 subchannel.cc:806] subchannel 0x5588c344a560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5588c3460f20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5588c34176e0, grpc.internal.client_channel_call_destination=0x7ffa45d30390, grpc.internal.event_engine=0x5588c34465b0, grpc.internal.security_connector=0x5588c33cafb0, grpc.internal.subchannel_pool=0x5588c349a410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5588c3364a60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:16:11.871775529+01:00"}), backing off for 1000 ms
00:17:41.134   19:16:11 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@108 -- # tmp0=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:41.134    19:16:11 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@109 -- # create_device nqn.2016-06.io.spdk:cnode1
00:17:41.134    19:16:11 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:41.134    19:16:11 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@109 -- # jq -r .handle
00:17:41.393  I0000 00:00:1733508972.130799  578986 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:41.393  I0000 00:00:1733508972.132524  578986 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:41.393  I0000 00:00:1733508972.134071  578995 subchannel.cc:806] subchannel 0x561fdea96560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x561fdeaacf20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x561fdea636e0, grpc.internal.client_channel_call_destination=0x7fdef478f390, grpc.internal.event_engine=0x561fdea925b0, grpc.internal.security_connector=0x561fdea16fb0, grpc.internal.subchannel_pool=0x561fdeae6410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x561fde9b0a60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:16:12.133549951+01:00"}), backing off for 1000 ms
00:17:41.393   19:16:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@109 -- # tmp1=nvmf-tcp:nqn.2016-06.io.spdk:cnode1
00:17:41.393    19:16:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@111 -- # rpc_cmd nvmf_get_subsystems
00:17:41.393    19:16:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@111 -- # jq -r '. | length'
00:17:41.393    19:16:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:41.393    19:16:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:17:41.393    19:16:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:41.393   19:16:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@111 -- # [[ 3 -eq 3 ]]
00:17:41.393   19:16:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@112 -- # [[ nvmf-tcp:nqn.2016-06.io.spdk:cnode0 == \n\v\m\f\-\t\c\p\:\n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]]
00:17:41.393   19:16:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@113 -- # [[ nvmf-tcp:nqn.2016-06.io.spdk:cnode1 == \n\v\m\f\-\t\c\p\:\n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]]
00:17:41.393   19:16:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@116 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:41.393   19:16:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:41.651  I0000 00:00:1733508972.421473  579018 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:41.651  I0000 00:00:1733508972.423351  579018 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:41.651  I0000 00:00:1733508972.424800  579019 subchannel.cc:806] subchannel 0x562aaf9eb560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x562aafa01f20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x562aaf9b86e0, grpc.internal.client_channel_call_destination=0x7fce75d7b390, grpc.internal.event_engine=0x562aaf9e75b0, grpc.internal.security_connector=0x562aaf92bd60, grpc.internal.subchannel_pool=0x562aafa3b410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x562aaf905a60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:16:12.424292798+01:00"}), backing off for 999 ms
00:17:41.651  {}
00:17:41.651   19:16:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@117 -- # NOT rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:17:41.651   19:16:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@652 -- # local es=0
00:17:41.651   19:16:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:17:41.651   19:16:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:17:41.651   19:16:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:41.651    19:16:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:17:41.651   19:16:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:41.651   19:16:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:17:41.651   19:16:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:41.651   19:16:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:17:41.651  [2024-12-06 19:16:12.466110] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:cnode0' does not exist
00:17:41.651  request:
00:17:41.651  {
00:17:41.651    "nqn": "nqn.2016-06.io.spdk:cnode0",
00:17:41.651    "method": "nvmf_get_subsystems",
00:17:41.651    "req_id": 1
00:17:41.651  }
00:17:41.651  Got JSON-RPC error response
00:17:41.651  response:
00:17:41.651  {
00:17:41.651    "code": -19,
00:17:41.651    "message": "No such device"
00:17:41.651  }
00:17:41.651   19:16:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:17:41.651   19:16:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@655 -- # es=1
00:17:41.651   19:16:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:41.651   19:16:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:41.651   19:16:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:41.651    19:16:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@118 -- # rpc_cmd nvmf_get_subsystems
00:17:41.651    19:16:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:41.651    19:16:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:17:41.651    19:16:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@118 -- # jq -r '. | length'
00:17:41.651    19:16:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:41.651   19:16:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@118 -- # [[ 2 -eq 2 ]]
00:17:41.651   19:16:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@120 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:cnode1
00:17:41.651   19:16:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:41.909  I0000 00:00:1733508972.737295  579047 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:41.909  I0000 00:00:1733508972.739262  579047 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:41.909  I0000 00:00:1733508972.740739  579048 subchannel.cc:806] subchannel 0x558603f8c560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x558603fa2f20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x558603f596e0, grpc.internal.client_channel_call_destination=0x7fa3bc143390, grpc.internal.event_engine=0x558603f885b0, grpc.internal.security_connector=0x558603eccd60, grpc.internal.subchannel_pool=0x558603fdc410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x558603ea6a60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:16:12.740259728+01:00"}), backing off for 999 ms
00:17:41.909  {}
00:17:41.909   19:16:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@121 -- # NOT rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1
00:17:41.909   19:16:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@652 -- # local es=0
00:17:41.909   19:16:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1
00:17:41.909   19:16:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:17:41.909   19:16:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:41.909    19:16:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:17:41.909   19:16:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:41.909   19:16:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1
00:17:41.909   19:16:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:41.909   19:16:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:17:41.909  [2024-12-06 19:16:12.783068] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:cnode1' does not exist
00:17:41.909  request:
00:17:41.909  {
00:17:41.909    "nqn": "nqn.2016-06.io.spdk:cnode1",
00:17:41.909    "method": "nvmf_get_subsystems",
00:17:41.909    "req_id": 1
00:17:41.909  }
00:17:41.909  Got JSON-RPC error response
00:17:41.909  response:
00:17:41.909  {
00:17:41.909    "code": -19,
00:17:41.909    "message": "No such device"
00:17:41.909  }
00:17:41.909   19:16:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:17:41.909   19:16:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@655 -- # es=1
00:17:41.909   19:16:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:41.909   19:16:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:41.909   19:16:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:41.909    19:16:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@122 -- # rpc_cmd nvmf_get_subsystems
00:17:41.909    19:16:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@122 -- # jq -r '. | length'
00:17:41.909    19:16:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:41.909    19:16:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:17:41.909    19:16:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:41.909   19:16:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@122 -- # [[ 1 -eq 1 ]]
00:17:41.909   19:16:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@125 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:41.909   19:16:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:42.166  I0000 00:00:1733508973.064057  579078 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:42.166  I0000 00:00:1733508973.065846  579078 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:42.166  I0000 00:00:1733508973.067385  579203 subchannel.cc:806] subchannel 0x55f0a1f38560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55f0a1f4ef20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55f0a1f056e0, grpc.internal.client_channel_call_destination=0x7f2d67b00390, grpc.internal.event_engine=0x55f0a1f345b0, grpc.internal.security_connector=0x55f0a1e78d60, grpc.internal.subchannel_pool=0x55f0a1f88410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55f0a1e52a60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:16:13.066864664+01:00"}), backing off for 1000 ms
00:17:42.166  {}
00:17:42.166   19:16:13 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@126 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:cnode1
00:17:42.166   19:16:13 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:42.424  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:42.424  I0000 00:00:1733508973.334637  579223 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:42.424  I0000 00:00:1733508973.336672  579223 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:42.424  I0000 00:00:1733508973.338168  579224 subchannel.cc:806] subchannel 0x56290c144560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x56290c15af20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x56290c1116e0, grpc.internal.client_channel_call_destination=0x7fa4cf5bf390, grpc.internal.event_engine=0x56290c1405b0, grpc.internal.security_connector=0x56290c084d60, grpc.internal.subchannel_pool=0x56290c194410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x56290c05ea60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:16:13.337673034+01:00"}), backing off for 1000 ms
00:17:42.424  {}
00:17:42.424    19:16:13 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@129 -- # create_device nqn.2016-06.io.spdk:cnode0
00:17:42.424    19:16:13 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@129 -- # jq -r .handle
00:17:42.424    19:16:13 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:42.681  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:42.681  I0000 00:00:1733508973.596493  579247 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:42.681  I0000 00:00:1733508973.598247  579247 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:42.681  I0000 00:00:1733508973.599695  579248 subchannel.cc:806] subchannel 0x55fd73e58560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55fd73e6ef20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55fd73e256e0, grpc.internal.client_channel_call_destination=0x7f60760e2390, grpc.internal.event_engine=0x55fd73e545b0, grpc.internal.security_connector=0x55fd73dd8fb0, grpc.internal.subchannel_pool=0x55fd73ea8410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55fd73d72a60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:16:13.599225316+01:00"}), backing off for 999 ms
00:17:42.681  [2024-12-06 19:16:13.618407] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:17:42.939   19:16:13 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@129 -- # devid0=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:42.939    19:16:13 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@130 -- # create_device nqn.2016-06.io.spdk:cnode1
00:17:42.939    19:16:13 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@130 -- # jq -r .handle
00:17:42.939    19:16:13 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:42.939  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:42.939  I0000 00:00:1733508973.871374  579271 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:42.939  I0000 00:00:1733508973.873211  579271 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:42.939  I0000 00:00:1733508973.874658  579272 subchannel.cc:806] subchannel 0x558643e87560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x558643e9df20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x558643e546e0, grpc.internal.client_channel_call_destination=0x7f056c6c3390, grpc.internal.event_engine=0x558643e835b0, grpc.internal.security_connector=0x558643e07fb0, grpc.internal.subchannel_pool=0x558643ed7410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x558643da1a60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:16:13.874195226+01:00"}), backing off for 999 ms
00:17:43.196   19:16:13 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@130 -- # devid1=nvmf-tcp:nqn.2016-06.io.spdk:cnode1
00:17:43.196    19:16:13 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@131 -- # rpc_cmd bdev_get_bdevs -b null0
00:17:43.196    19:16:13 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@131 -- # jq -r '.[].uuid'
00:17:43.196    19:16:13 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:43.196    19:16:13 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:17:43.196    19:16:13 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:43.196   19:16:13 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@131 -- # uuid=858b9158-ab35-46b0-9410-adeea96d4c60
00:17:43.196   19:16:13 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@134 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 858b9158-ab35-46b0-9410-adeea96d4c60
00:17:43.196   19:16:13 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@45 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:43.196    19:16:13 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@45 -- # uuid2base64 858b9158-ab35-46b0-9410-adeea96d4c60
00:17:43.196    19:16:13 sma.sma_nvmf_tcp -- sma/common.sh@20 -- # python
00:17:43.453  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:43.453  I0000 00:00:1733508974.236708  579314 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:43.453  I0000 00:00:1733508974.238436  579314 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:43.453  I0000 00:00:1733508974.239908  579431 subchannel.cc:806] subchannel 0x5574510eb560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x557451101f20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5574510b86e0, grpc.internal.client_channel_call_destination=0x7f6cdfa73390, grpc.internal.event_engine=0x5574510e75b0, grpc.internal.security_connector=0x5574510e7540, grpc.internal.subchannel_pool=0x55745113b410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x557451005a60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:16:14.239438965+01:00"}), backing off for 1000 ms
00:17:43.453  {}
00:17:43.453    19:16:14 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@135 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:17:43.453    19:16:14 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@135 -- # jq -r '.[0].namespaces | length'
00:17:43.453    19:16:14 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:43.453    19:16:14 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:17:43.453    19:16:14 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:43.453   19:16:14 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@135 -- # [[ 1 -eq 1 ]]
00:17:43.453    19:16:14 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@136 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1
00:17:43.453    19:16:14 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@136 -- # jq -r '.[0].namespaces | length'
00:17:43.453    19:16:14 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:43.453    19:16:14 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:17:43.453    19:16:14 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:43.453   19:16:14 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@136 -- # [[ 0 -eq 0 ]]
00:17:43.453    19:16:14 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@137 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:17:43.453    19:16:14 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@137 -- # jq -r '.[0].namespaces[0].uuid'
00:17:43.453    19:16:14 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:43.453    19:16:14 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:17:43.453    19:16:14 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:43.453   19:16:14 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@137 -- # [[ 858b9158-ab35-46b0-9410-adeea96d4c60 == \8\5\8\b\9\1\5\8\-\a\b\3\5\-\4\6\b\0\-\9\4\1\0\-\a\d\e\e\a\9\6\d\4\c\6\0 ]]
00:17:43.453   19:16:14 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@140 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 858b9158-ab35-46b0-9410-adeea96d4c60
00:17:43.453   19:16:14 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@45 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:43.709    19:16:14 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@45 -- # uuid2base64 858b9158-ab35-46b0-9410-adeea96d4c60
00:17:43.709    19:16:14 sma.sma_nvmf_tcp -- sma/common.sh@20 -- # python
00:17:43.966  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:43.966  I0000 00:00:1733508974.672743  579461 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:43.966  I0000 00:00:1733508974.674644  579461 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:43.966  I0000 00:00:1733508974.676233  579464 subchannel.cc:806] subchannel 0x55e352df0560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55e352e06f20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55e352dbd6e0, grpc.internal.client_channel_call_destination=0x7f265ab26390, grpc.internal.event_engine=0x55e352dec5b0, grpc.internal.security_connector=0x55e352dec540, grpc.internal.subchannel_pool=0x55e352e40410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55e352d0aa60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:16:14.675721557+01:00"}), backing off for 1000 ms
00:17:43.966  {}
00:17:43.966    19:16:14 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@141 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:17:43.966    19:16:14 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@141 -- # jq -r '.[0].namespaces | length'
00:17:43.966    19:16:14 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:43.966    19:16:14 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:17:43.966    19:16:14 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:43.966   19:16:14 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@141 -- # [[ 1 -eq 1 ]]
00:17:43.966    19:16:14 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@142 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1
00:17:43.966    19:16:14 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:43.966    19:16:14 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@142 -- # jq -r '.[0].namespaces | length'
00:17:43.966    19:16:14 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:17:43.966    19:16:14 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:43.966   19:16:14 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@142 -- # [[ 0 -eq 0 ]]
00:17:43.966    19:16:14 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@143 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:17:43.966    19:16:14 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@143 -- # jq -r '.[0].namespaces[0].uuid'
00:17:43.966    19:16:14 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:43.966    19:16:14 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:17:43.966    19:16:14 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:43.966   19:16:14 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@143 -- # [[ 858b9158-ab35-46b0-9410-adeea96d4c60 == \8\5\8\b\9\1\5\8\-\a\b\3\5\-\4\6\b\0\-\9\4\1\0\-\a\d\e\e\a\9\6\d\4\c\6\0 ]]
00:17:43.966   19:16:14 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@146 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 858b9158-ab35-46b0-9410-adeea96d4c60
00:17:43.966   19:16:14 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@59 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:43.966    19:16:14 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@59 -- # uuid2base64 858b9158-ab35-46b0-9410-adeea96d4c60
00:17:43.966    19:16:14 sma.sma_nvmf_tcp -- sma/common.sh@20 -- # python
00:17:44.224  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:44.224  I0000 00:00:1733508975.107460  579493 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:44.224  I0000 00:00:1733508975.109348  579493 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:44.224  I0000 00:00:1733508975.110946  579532 subchannel.cc:806] subchannel 0x55b60c078560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55b60c08ef20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55b60c0456e0, grpc.internal.client_channel_call_destination=0x7f459565e390, grpc.internal.event_engine=0x55b60c0745b0, grpc.internal.security_connector=0x55b60bff8fb0, grpc.internal.subchannel_pool=0x55b60c0c8410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55b60bf92a60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:16:15.110387002+01:00"}), backing off for 1000 ms
00:17:44.224  {}
00:17:44.224    19:16:15 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@147 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:17:44.224    19:16:15 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@147 -- # jq -r '.[0].namespaces | length'
00:17:44.224    19:16:15 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:44.224    19:16:15 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:17:44.224    19:16:15 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:44.480   19:16:15 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@147 -- # [[ 0 -eq 0 ]]
00:17:44.480    19:16:15 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@148 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1
00:17:44.480    19:16:15 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:44.480    19:16:15 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@148 -- # jq -r '.[0].namespaces | length'
00:17:44.480    19:16:15 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:17:44.480    19:16:15 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:44.480   19:16:15 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@148 -- # [[ 0 -eq 0 ]]
00:17:44.480   19:16:15 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@151 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 858b9158-ab35-46b0-9410-adeea96d4c60
00:17:44.480   19:16:15 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@59 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:44.480    19:16:15 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@59 -- # uuid2base64 858b9158-ab35-46b0-9410-adeea96d4c60
00:17:44.480    19:16:15 sma.sma_nvmf_tcp -- sma/common.sh@20 -- # python
00:17:44.737  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:44.737  I0000 00:00:1733508975.504238  579653 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:44.737  I0000 00:00:1733508975.506310  579653 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:44.737  I0000 00:00:1733508975.507828  579660 subchannel.cc:806] subchannel 0x559c01fff560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x559c02015f20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x559c01fcc6e0, grpc.internal.client_channel_call_destination=0x7f3092291390, grpc.internal.event_engine=0x559c01ffb5b0, grpc.internal.security_connector=0x559c01f7ffb0, grpc.internal.subchannel_pool=0x559c0204f410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x559c01f19a60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:16:15.507342109+01:00"}), backing off for 1000 ms
00:17:44.737  {}
00:17:44.737   19:16:15 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@153 -- # cleanup
00:17:44.737   19:16:15 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@13 -- # killprocess 578476
00:17:44.737   19:16:15 sma.sma_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 578476 ']'
00:17:44.737   19:16:15 sma.sma_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 578476
00:17:44.737    19:16:15 sma.sma_nvmf_tcp -- common/autotest_common.sh@959 -- # uname
00:17:44.737   19:16:15 sma.sma_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:44.737    19:16:15 sma.sma_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 578476
00:17:44.737   19:16:15 sma.sma_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:44.737   19:16:15 sma.sma_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:44.737   19:16:15 sma.sma_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 578476'
00:17:44.737  killing process with pid 578476
00:17:44.737   19:16:15 sma.sma_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 578476
00:17:44.737   19:16:15 sma.sma_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 578476
00:17:47.262   19:16:17 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@14 -- # killprocess 578477
00:17:47.262   19:16:17 sma.sma_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 578477 ']'
00:17:47.262   19:16:17 sma.sma_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 578477
00:17:47.262    19:16:17 sma.sma_nvmf_tcp -- common/autotest_common.sh@959 -- # uname
00:17:47.262   19:16:17 sma.sma_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:47.262    19:16:17 sma.sma_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 578477
00:17:47.262   19:16:17 sma.sma_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=python3
00:17:47.262   19:16:17 sma.sma_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:17:47.262   19:16:17 sma.sma_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 578477'
00:17:47.262  killing process with pid 578477
00:17:47.262   19:16:17 sma.sma_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 578477
00:17:47.262   19:16:17 sma.sma_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 578477
00:17:47.262   19:16:17 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@154 -- # trap - SIGINT SIGTERM EXIT
00:17:47.262  
00:17:47.262  real	0m9.054s
00:17:47.262  user	0m12.651s
00:17:47.262  sys	0m1.547s
00:17:47.262   19:16:17 sma.sma_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:47.262   19:16:17 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:17:47.262  ************************************
00:17:47.262  END TEST sma_nvmf_tcp
00:17:47.262  ************************************
00:17:47.262   19:16:17 sma -- sma/sma.sh@12 -- # run_test sma_vfiouser_qemu /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/vfiouser_qemu.sh
00:17:47.262   19:16:17 sma -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:47.262   19:16:17 sma -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:47.262   19:16:17 sma -- common/autotest_common.sh@10 -- # set +x
00:17:47.262  ************************************
00:17:47.262  START TEST sma_vfiouser_qemu
00:17:47.262  ************************************
00:17:47.262   19:16:17 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/vfiouser_qemu.sh
00:17:47.262  * Looking for test storage...
00:17:47.262  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:17:47.262    19:16:17 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:17:47.262     19:16:17 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1711 -- # lcov --version
00:17:47.262     19:16:17 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:17:47.262    19:16:17 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:17:47.262    19:16:17 sma.sma_vfiouser_qemu -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:47.262    19:16:17 sma.sma_vfiouser_qemu -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:47.262    19:16:17 sma.sma_vfiouser_qemu -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:47.262    19:16:17 sma.sma_vfiouser_qemu -- scripts/common.sh@336 -- # IFS=.-:
00:17:47.262    19:16:17 sma.sma_vfiouser_qemu -- scripts/common.sh@336 -- # read -ra ver1
00:17:47.262    19:16:17 sma.sma_vfiouser_qemu -- scripts/common.sh@337 -- # IFS=.-:
00:17:47.262    19:16:17 sma.sma_vfiouser_qemu -- scripts/common.sh@337 -- # read -ra ver2
00:17:47.262    19:16:17 sma.sma_vfiouser_qemu -- scripts/common.sh@338 -- # local 'op=<'
00:17:47.262    19:16:17 sma.sma_vfiouser_qemu -- scripts/common.sh@340 -- # ver1_l=2
00:17:47.262    19:16:17 sma.sma_vfiouser_qemu -- scripts/common.sh@341 -- # ver2_l=1
00:17:47.262    19:16:17 sma.sma_vfiouser_qemu -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:47.262    19:16:17 sma.sma_vfiouser_qemu -- scripts/common.sh@344 -- # case "$op" in
00:17:47.262    19:16:17 sma.sma_vfiouser_qemu -- scripts/common.sh@345 -- # : 1
00:17:47.262    19:16:17 sma.sma_vfiouser_qemu -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:47.262    19:16:17 sma.sma_vfiouser_qemu -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:47.262     19:16:17 sma.sma_vfiouser_qemu -- scripts/common.sh@365 -- # decimal 1
00:17:47.262     19:16:17 sma.sma_vfiouser_qemu -- scripts/common.sh@353 -- # local d=1
00:17:47.262     19:16:17 sma.sma_vfiouser_qemu -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:47.262     19:16:17 sma.sma_vfiouser_qemu -- scripts/common.sh@355 -- # echo 1
00:17:47.263    19:16:17 sma.sma_vfiouser_qemu -- scripts/common.sh@365 -- # ver1[v]=1
00:17:47.263     19:16:17 sma.sma_vfiouser_qemu -- scripts/common.sh@366 -- # decimal 2
00:17:47.263     19:16:17 sma.sma_vfiouser_qemu -- scripts/common.sh@353 -- # local d=2
00:17:47.263     19:16:17 sma.sma_vfiouser_qemu -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:47.263     19:16:17 sma.sma_vfiouser_qemu -- scripts/common.sh@355 -- # echo 2
00:17:47.263    19:16:17 sma.sma_vfiouser_qemu -- scripts/common.sh@366 -- # ver2[v]=2
00:17:47.263    19:16:17 sma.sma_vfiouser_qemu -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:47.263    19:16:17 sma.sma_vfiouser_qemu -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:17:47.263    19:16:17 sma.sma_vfiouser_qemu -- scripts/common.sh@368 -- # return 0
00:17:47.263    19:16:17 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:47.263    19:16:17 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:17:47.263  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:47.263  		--rc genhtml_branch_coverage=1
00:17:47.263  		--rc genhtml_function_coverage=1
00:17:47.263  		--rc genhtml_legend=1
00:17:47.263  		--rc geninfo_all_blocks=1
00:17:47.263  		--rc geninfo_unexecuted_blocks=1
00:17:47.263  		
00:17:47.263  		'
00:17:47.263    19:16:17 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:17:47.263  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:47.263  		--rc genhtml_branch_coverage=1
00:17:47.263  		--rc genhtml_function_coverage=1
00:17:47.263  		--rc genhtml_legend=1
00:17:47.263  		--rc geninfo_all_blocks=1
00:17:47.263  		--rc geninfo_unexecuted_blocks=1
00:17:47.263  		
00:17:47.263  		'
00:17:47.263    19:16:17 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:17:47.263  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:47.263  		--rc genhtml_branch_coverage=1
00:17:47.263  		--rc genhtml_function_coverage=1
00:17:47.263  		--rc genhtml_legend=1
00:17:47.263  		--rc geninfo_all_blocks=1
00:17:47.263  		--rc geninfo_unexecuted_blocks=1
00:17:47.263  		
00:17:47.263  		'
00:17:47.263    19:16:17 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:17:47.263  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:47.263  		--rc genhtml_branch_coverage=1
00:17:47.263  		--rc genhtml_function_coverage=1
00:17:47.263  		--rc genhtml_legend=1
00:17:47.263  		--rc geninfo_all_blocks=1
00:17:47.263  		--rc geninfo_unexecuted_blocks=1
00:17:47.263  		
00:17:47.263  		'
00:17:47.263   19:16:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/common.sh
00:17:47.263    19:16:17 sma.sma_vfiouser_qemu -- vfio_user/common.sh@6 -- # : 128
00:17:47.263    19:16:17 sma.sma_vfiouser_qemu -- vfio_user/common.sh@7 -- # : 512
00:17:47.263    19:16:17 sma.sma_vfiouser_qemu -- vfio_user/common.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh
00:17:47.263     19:16:17 sma.sma_vfiouser_qemu -- vhost/common.sh@6 -- # : false
00:17:47.263     19:16:17 sma.sma_vfiouser_qemu -- vhost/common.sh@7 -- # : /root/vhost_test
00:17:47.263     19:16:17 sma.sma_vfiouser_qemu -- vhost/common.sh@8 -- # : /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:17:47.263     19:16:17 sma.sma_vfiouser_qemu -- vhost/common.sh@9 -- # : qemu-img
00:17:47.263      19:16:17 sma.sma_vfiouser_qemu -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/..
00:17:47.263     19:16:17 sma.sma_vfiouser_qemu -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest
00:17:47.263     19:16:17 sma.sma_vfiouser_qemu -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:17:47.263     19:16:17 sma.sma_vfiouser_qemu -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:17:47.263     19:16:17 sma.sma_vfiouser_qemu -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:17:47.263     19:16:17 sma.sma_vfiouser_qemu -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:17:47.263     19:16:17 sma.sma_vfiouser_qemu -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:17:47.263       19:16:17 sma.sma_vfiouser_qemu -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/vfiouser_qemu.sh
00:17:47.263      19:16:17 sma.sma_vfiouser_qemu -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:17:47.263     19:16:17 sma.sma_vfiouser_qemu -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:17:47.263     19:16:17 sma.sma_vfiouser_qemu -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:17:47.263     19:16:17 sma.sma_vfiouser_qemu -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:17:47.263     19:16:17 sma.sma_vfiouser_qemu -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:17:47.263     19:16:17 sma.sma_vfiouser_qemu -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:17:47.263     19:16:17 sma.sma_vfiouser_qemu -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config
00:17:47.263      19:16:17 sma.sma_vfiouser_qemu -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:17:47.263      19:16:17 sma.sma_vfiouser_qemu -- common/autotest.config@2 -- # vhost_0_main_core=0
00:17:47.263      19:16:17 sma.sma_vfiouser_qemu -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:17:47.263      19:16:17 sma.sma_vfiouser_qemu -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:17:47.263      19:16:17 sma.sma_vfiouser_qemu -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:17:47.263      19:16:17 sma.sma_vfiouser_qemu -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:17:47.263      19:16:17 sma.sma_vfiouser_qemu -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:17:47.263      19:16:17 sma.sma_vfiouser_qemu -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:17:47.263      19:16:17 sma.sma_vfiouser_qemu -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:17:47.263      19:16:17 sma.sma_vfiouser_qemu -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:17:47.263      19:16:17 sma.sma_vfiouser_qemu -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:17:47.263      19:16:17 sma.sma_vfiouser_qemu -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:17:47.263      19:16:17 sma.sma_vfiouser_qemu -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:17:47.263      19:16:17 sma.sma_vfiouser_qemu -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:17:47.263      19:16:17 sma.sma_vfiouser_qemu -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:17:47.263      19:16:17 sma.sma_vfiouser_qemu -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:17:47.263      19:16:17 sma.sma_vfiouser_qemu -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:17:47.263      19:16:17 sma.sma_vfiouser_qemu -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:17:47.263      19:16:17 sma.sma_vfiouser_qemu -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:17:47.263      19:16:17 sma.sma_vfiouser_qemu -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:17:47.263      19:16:17 sma.sma_vfiouser_qemu -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:17:47.263      19:16:17 sma.sma_vfiouser_qemu -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:17:47.263      19:16:17 sma.sma_vfiouser_qemu -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:17:47.263      19:16:17 sma.sma_vfiouser_qemu -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:17:47.263      19:16:17 sma.sma_vfiouser_qemu -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:17:47.263      19:16:17 sma.sma_vfiouser_qemu -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:17:47.263     19:16:17 sma.sma_vfiouser_qemu -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh
00:17:47.263      19:16:17 sma.sma_vfiouser_qemu -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:17:47.263      19:16:17 sma.sma_vfiouser_qemu -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:17:47.263      19:16:17 sma.sma_vfiouser_qemu -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:17:47.263      19:16:17 sma.sma_vfiouser_qemu -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler
00:17:47.263      19:16:17 sma.sma_vfiouser_qemu -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:17:47.263      19:16:17 sma.sma_vfiouser_qemu -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh
00:17:47.263       19:16:17 sma.sma_vfiouser_qemu -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:17:47.263        19:16:17 sma.sma_vfiouser_qemu -- scheduler/cgroups.sh@244 -- # check_cgroup
00:17:47.263        19:16:17 sma.sma_vfiouser_qemu -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:17:47.263        19:16:17 sma.sma_vfiouser_qemu -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:17:47.263        19:16:17 sma.sma_vfiouser_qemu -- scheduler/cgroups.sh@10 -- # echo 2
00:17:47.263       19:16:17 sma.sma_vfiouser_qemu -- scheduler/cgroups.sh@244 -- # cgroup_version=2
00:17:47.263    19:16:17 sma.sma_vfiouser_qemu -- vfio_user/common.sh@11 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:17:47.263    19:16:17 sma.sma_vfiouser_qemu -- vfio_user/common.sh@14 -- # [[ ! -e /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 ]]
00:17:47.263    19:16:17 sma.sma_vfiouser_qemu -- vfio_user/common.sh@19 -- # QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:17:47.263   19:16:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:17:47.263   19:16:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@104 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:17:47.263   19:16:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@107 -- # VM_PASSWORD=root
00:17:47.263   19:16:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@108 -- # vm_no=0
00:17:47.263   19:16:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@110 -- # VFO_ROOT_PATH=/tmp/sma/vfio-user/qemu
00:17:47.263   19:16:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@112 -- # '[' -e /tmp/sma/vfio-user/qemu ']'
00:17:47.263   19:16:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@113 -- # mkdir -p /tmp/sma/vfio-user/qemu
00:17:47.263   19:16:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@116 -- # used_vms=0
00:17:47.263   19:16:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@117 -- # vm_kill_all
00:17:47.263   19:16:17 sma.sma_vfiouser_qemu -- vhost/common.sh@476 -- # local vm
00:17:47.263    19:16:17 sma.sma_vfiouser_qemu -- vhost/common.sh@477 -- # vm_list_all
00:17:47.263    19:16:17 sma.sma_vfiouser_qemu -- vhost/common.sh@466 -- # vms=()
00:17:47.263    19:16:17 sma.sma_vfiouser_qemu -- vhost/common.sh@466 -- # local vms
00:17:47.263    19:16:17 sma.sma_vfiouser_qemu -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:17:47.263    19:16:17 sma.sma_vfiouser_qemu -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:17:47.263    19:16:17 sma.sma_vfiouser_qemu -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:17:47.263   19:16:17 sma.sma_vfiouser_qemu -- vhost/common.sh@477 -- # for vm in $(vm_list_all)
00:17:47.263   19:16:17 sma.sma_vfiouser_qemu -- vhost/common.sh@478 -- # vm_kill 1
00:17:47.263   19:16:17 sma.sma_vfiouser_qemu -- vhost/common.sh@442 -- # vm_num_is_valid 1
00:17:47.263   19:16:17 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:47.263   19:16:17 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:17:47.263   19:16:17 sma.sma_vfiouser_qemu -- vhost/common.sh@443 -- # local vm_dir=/root/vhost_test/vms/1
00:17:47.264   19:16:17 sma.sma_vfiouser_qemu -- vhost/common.sh@445 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:17:47.264   19:16:17 sma.sma_vfiouser_qemu -- vhost/common.sh@446 -- # return 0
00:17:47.264   19:16:17 sma.sma_vfiouser_qemu -- vhost/common.sh@481 -- # rm -rf /root/vhost_test/vms
00:17:47.264   19:16:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@119 -- # vm_setup --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disk-type=virtio --force=0 '--qemu-args=-qmp tcp:localhost:10005,server,nowait -device pci-bridge,chassis_nr=1,id=pci.spdk.0 -device pci-bridge,chassis_nr=2,id=pci.spdk.1'
00:17:47.264   19:16:17 sma.sma_vfiouser_qemu -- vhost/common.sh@518 -- # xtrace_disable
00:17:47.264   19:16:17 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:17:47.264  INFO: Creating new VM in /root/vhost_test/vms/0
00:17:47.264  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:17:47.264  INFO: TASK MASK: 1-2
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@671 -- # local node_num=0
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@672 -- # local boot_disk_present=false
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@60 -- # local verbose_out
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@61 -- # false
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@62 -- # verbose_out=
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@69 -- # local msg_type=INFO
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@70 -- # shift
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:17:47.264  INFO: NUMA NODE: 0
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@677 -- # [[ -n '' ]]
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@686 -- # [[ -z '' ]]
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@691 -- # (( 0 == 0 ))
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@691 -- # [[ virtio == virtio* ]]
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@692 -- # disks=("default_virtio.img")
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@701 -- # IFS=,
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@701 -- # read -r disk disk_type _
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@702 -- # [[ -z '' ]]
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@702 -- # disk_type=virtio
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@704 -- # case $disk_type in
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@706 -- # local raw_name=RAWSCSI
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@707 -- # local raw_disk=/root/vhost_test/vms/0/test.img
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@710 -- # [[ -f default_virtio.img ]]
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@714 -- # notice 'Creating Virtio disc /root/vhost_test/vms/0/test.img'
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@94 -- # message INFO 'Creating Virtio disc /root/vhost_test/vms/0/test.img'
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@60 -- # local verbose_out
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@61 -- # false
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@62 -- # verbose_out=
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@69 -- # local msg_type=INFO
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@70 -- # shift
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@71 -- # echo -e 'INFO: Creating Virtio disc /root/vhost_test/vms/0/test.img'
00:17:47.264  INFO: Creating Virtio disc /root/vhost_test/vms/0/test.img
00:17:47.264   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@715 -- # dd if=/dev/zero of=/root/vhost_test/vms/0/test.img bs=1024k count=1024
00:17:47.836  1024+0 records in
00:17:47.836  1024+0 records out
00:17:47.836  1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.450843 s, 2.4 GB/s
00:17:47.836   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@718 -- # cmd+=(-device "virtio-scsi-pci,num_queues=$queue_number")
00:17:47.836   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@719 -- # cmd+=(-device "scsi-hd,drive=hd$i,vendor=$raw_name")
00:17:47.836   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@720 -- # cmd+=(-drive "if=none,id=hd$i,file=$raw_disk,format=raw$raw_cache")
00:17:47.836   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@780 -- # [[ -n '' ]]
00:17:47.836   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@785 -- # (( 1 ))
00:17:47.836   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@785 -- # cmd+=("${qemu_args[@]}")
00:17:47.836   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/0/run.sh'
00:17:47.836   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/0/run.sh'
00:17:47.836   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@60 -- # local verbose_out
00:17:47.836   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@61 -- # false
00:17:47.836   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@62 -- # verbose_out=
00:17:47.836   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@69 -- # local msg_type=INFO
00:17:47.836   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@70 -- # shift
00:17:47.836   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/0/run.sh'
00:17:47.836  INFO: Saving to /root/vhost_test/vms/0/run.sh
00:17:47.836   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@787 -- # cat
00:17:47.836    19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 1-2 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :100 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10002,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/0/qemu.pid -serial file:/root/vhost_test/vms/0/serial.log -D /root/vhost_test/vms/0/qemu.log -chardev file,path=/root/vhost_test/vms/0/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10000-:22,hostfwd=tcp::10001-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device virtio-scsi-pci,num_queues=2 -device scsi-hd,drive=hd,vendor=RAWSCSI -drive if=none,id=hd,file=/root/vhost_test/vms/0/test.img,format=raw '-qmp tcp:localhost:10005,server,nowait -device pci-bridge,chassis_nr=1,id=pci.spdk.0 -device pci-bridge,chassis_nr=2,id=pci.spdk.1'
00:17:47.836   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/0/run.sh
00:17:47.837   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@827 -- # echo 10000
00:17:47.837   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@828 -- # echo 10001
00:17:47.837   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@829 -- # echo 10002
00:17:47.837   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/0/migration_port
00:17:47.837   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@832 -- # [[ -z '' ]]
00:17:47.837   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@834 -- # echo 10004
00:17:47.837   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@835 -- # echo 100
00:17:47.837   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@837 -- # [[ -z '' ]]
00:17:47.837   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@838 -- # [[ -z '' ]]
00:17:47.837   19:16:18 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@124 -- # vm_run 0
00:17:47.837   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:17:47.837   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@843 -- # local run_all=false
00:17:47.837   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@844 -- # local vms_to_run=
00:17:47.837   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@846 -- # getopts a-: optchar
00:17:47.837   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@856 -- # false
00:17:47.837   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@859 -- # shift 0
00:17:47.837   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@860 -- # for vm in "$@"
00:17:47.837   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@861 -- # vm_num_is_valid 0
00:17:47.837   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:17:47.837   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:17:47.837   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/0/run.sh ]]
00:17:47.837   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@866 -- # vms_to_run+=' 0'
00:17:47.837   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:17:47.837   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@871 -- # vm_is_running 0
00:17:47.837   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@369 -- # vm_num_is_valid 0
00:17:47.837   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:17:47.837   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:17:47.837   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/0
00:17:47.837   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:17:47.837   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@373 -- # return 1
00:17:47.837   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/0/run.sh'
00:17:47.837   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/0/run.sh'
00:17:47.837   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@60 -- # local verbose_out
00:17:47.837   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@61 -- # false
00:17:47.837   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@62 -- # verbose_out=
00:17:47.837   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@69 -- # local msg_type=INFO
00:17:47.837   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@70 -- # shift
00:17:47.837   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/0/run.sh'
00:17:47.837  INFO: running /root/vhost_test/vms/0/run.sh
00:17:47.837   19:16:18 sma.sma_vfiouser_qemu -- vhost/common.sh@877 -- # /root/vhost_test/vms/0/run.sh
00:17:47.837  Running VM in /root/vhost_test/vms/0
00:17:47.837  Waiting for QEMU pid file
00:17:48.928  === qemu.log ===
00:17:48.928  === qemu.log ===
00:17:48.928   19:16:19 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@125 -- # vm_wait_for_boot 300 0
00:17:48.928   19:16:19 sma.sma_vfiouser_qemu -- vhost/common.sh@913 -- # assert_number 300
00:17:48.928   19:16:19 sma.sma_vfiouser_qemu -- vhost/common.sh@281 -- # [[ 300 =~ [0-9]+ ]]
00:17:48.928   19:16:19 sma.sma_vfiouser_qemu -- vhost/common.sh@281 -- # return 0
00:17:48.928   19:16:19 sma.sma_vfiouser_qemu -- vhost/common.sh@915 -- # xtrace_disable
00:17:48.928   19:16:19 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:17:48.928  INFO: Waiting for VMs to boot
00:17:48.928  INFO: waiting for VM0 (/root/vhost_test/vms/0)
00:18:10.872  
00:18:10.872  INFO: VM0 ready
00:18:10.872  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:18:10.872  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:18:10.872  INFO: all VMs ready
00:18:10.872   19:16:41 sma.sma_vfiouser_qemu -- vhost/common.sh@973 -- # return 0
00:18:10.872   19:16:41 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@129 -- # tgtpid=582920
00:18:10.872   19:16:41 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@128 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc
00:18:10.872   19:16:41 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@130 -- # waitforlisten 582920
00:18:10.872   19:16:41 sma.sma_vfiouser_qemu -- common/autotest_common.sh@835 -- # '[' -z 582920 ']'
00:18:10.872   19:16:41 sma.sma_vfiouser_qemu -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:10.872   19:16:41 sma.sma_vfiouser_qemu -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:10.872   19:16:41 sma.sma_vfiouser_qemu -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:10.872  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:10.872   19:16:41 sma.sma_vfiouser_qemu -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:10.872   19:16:41 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:11.130  [2024-12-06 19:16:41.856642] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:18:11.130  [2024-12-06 19:16:41.856787] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid582920 ]
00:18:11.130  EAL: No free 2048 kB hugepages reported on node 1
00:18:11.130  [2024-12-06 19:16:41.992199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:11.388  [2024-12-06 19:16:42.110755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:11.954   19:16:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:11.954   19:16:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@868 -- # return 0
00:18:11.954   19:16:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@133 -- # rpc_cmd dpdk_cryptodev_scan_accel_module
00:18:11.954   19:16:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:11.954   19:16:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:11.954   19:16:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:11.954   19:16:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@134 -- # rpc_cmd dpdk_cryptodev_set_driver -d crypto_aesni_mb
00:18:11.954   19:16:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:11.954   19:16:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:11.954  [2024-12-06 19:16:42.801399] accel_dpdk_cryptodev.c: 224:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb
00:18:11.954   19:16:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:11.954   19:16:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@135 -- # rpc_cmd accel_assign_opc -o encrypt -m dpdk_cryptodev
00:18:11.954   19:16:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:11.954   19:16:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:11.954  [2024-12-06 19:16:42.809410] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:18:11.954   19:16:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:11.954   19:16:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@136 -- # rpc_cmd accel_assign_opc -o decrypt -m dpdk_cryptodev
00:18:11.954   19:16:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:11.954   19:16:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:11.954  [2024-12-06 19:16:42.817460] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:18:11.954   19:16:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:11.954   19:16:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@137 -- # rpc_cmd framework_start_init
00:18:11.954   19:16:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:11.954   19:16:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:12.212  [2024-12-06 19:16:43.058907] accel_dpdk_cryptodev.c:1179:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 1
00:18:12.779   19:16:43 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:12.779   19:16:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@140 -- # rpc_cmd bdev_null_create null0 100 4096
00:18:12.779   19:16:43 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:12.779   19:16:43 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:12.779  null0
00:18:12.779   19:16:43 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:12.779   19:16:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@141 -- # rpc_cmd bdev_null_create null1 100 4096
00:18:12.779   19:16:43 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:12.779   19:16:43 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:12.779  null1
00:18:12.779   19:16:43 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:12.779   19:16:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@160 -- # smapid=583184
00:18:12.779   19:16:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@163 -- # sma_waitforlisten
00:18:12.779   19:16:43 sma.sma_vfiouser_qemu -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:18:12.779   19:16:43 sma.sma_vfiouser_qemu -- sma/common.sh@8 -- # local sma_port=8080
00:18:12.779   19:16:43 sma.sma_vfiouser_qemu -- sma/common.sh@10 -- # (( i = 0 ))
00:18:12.779   19:16:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@144 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:18:12.779   19:16:43 sma.sma_vfiouser_qemu -- sma/common.sh@10 -- # (( i < 5 ))
00:18:12.779   19:16:43 sma.sma_vfiouser_qemu -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:18:12.779    19:16:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@144 -- # cat
00:18:12.779   19:16:43 sma.sma_vfiouser_qemu -- sma/common.sh@14 -- # sleep 1s
00:18:13.037  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:13.037  I0000 00:00:1733509003.952810  583184 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:13.970   19:16:44 sma.sma_vfiouser_qemu -- sma/common.sh@10 -- # (( i++ ))
00:18:13.970   19:16:44 sma.sma_vfiouser_qemu -- sma/common.sh@10 -- # (( i < 5 ))
00:18:13.970   19:16:44 sma.sma_vfiouser_qemu -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:18:13.970   19:16:44 sma.sma_vfiouser_qemu -- sma/common.sh@12 -- # return 0
00:18:13.970   19:16:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@166 -- # rpc_cmd nvmf_get_transports --trtype VFIOUSER
00:18:13.970   19:16:44 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:13.970   19:16:44 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:13.970  [
00:18:13.970  {
00:18:13.970  "trtype": "VFIOUSER",
00:18:13.970  "max_queue_depth": 256,
00:18:13.970  "max_io_qpairs_per_ctrlr": 127,
00:18:13.970  "in_capsule_data_size": 0,
00:18:13.970  "max_io_size": 131072,
00:18:13.970  "io_unit_size": 131072,
00:18:13.970  "max_aq_depth": 32,
00:18:13.970  "num_shared_buffers": 0,
00:18:13.970  "buf_cache_size": 0,
00:18:13.970  "dif_insert_or_strip": false,
00:18:13.970  "zcopy": false,
00:18:13.970  "abort_timeout_sec": 0,
00:18:13.970  "ack_timeout": 0,
00:18:13.970  "data_wr_pool_size": 0
00:18:13.970  }
00:18:13.970  ]
00:18:13.970   19:16:44 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:13.970   19:16:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@169 -- # vm_exec 0 '[[ ! -e /sys/class/nvme-subsystem/nvme-subsys0 ]]'
00:18:13.970   19:16:44 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:18:13.970   19:16:44 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:13.970   19:16:44 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:13.970   19:16:44 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:18:13.970   19:16:44 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:18:13.970    19:16:44 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:18:13.970    19:16:44 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:18:13.970    19:16:44 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:13.970    19:16:44 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:13.970    19:16:44 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:18:13.970    19:16:44 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:18:13.970   19:16:44 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 '[[ ! -e /sys/class/nvme-subsystem/nvme-subsys0 ]]'
00:18:13.970  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:18:14.228    19:16:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@172 -- # create_device 0 0
00:18:14.228    19:16:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@172 -- # jq -r .handle
00:18:14.228    19:16:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=0
00:18:14.228    19:16:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:18:14.228    19:16:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:14.228  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:14.228  I0000 00:00:1733509005.151343  583361 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:14.228  I0000 00:00:1733509005.153039  583361 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:14.228  [2024-12-06 19:16:45.156106] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-0' does not exist
00:18:14.486   19:16:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@172 -- # device0=nvme:nqn.2016-06.io.spdk:vfiouser-0
00:18:14.486   19:16:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@173 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:18:14.486   19:16:45 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:14.486   19:16:45 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:14.486  [
00:18:14.486  {
00:18:14.486  "nqn": "nqn.2016-06.io.spdk:vfiouser-0",
00:18:14.486  "subtype": "NVMe",
00:18:14.486  "listen_addresses": [
00:18:14.486  {
00:18:14.486  "trtype": "VFIOUSER",
00:18:14.486  "adrfam": "IPv4",
00:18:14.486  "traddr": "/var/tmp/vfiouser-0",
00:18:14.486  "trsvcid": ""
00:18:14.486  }
00:18:14.486  ],
00:18:14.486  "allow_any_host": true,
00:18:14.486  "hosts": [],
00:18:14.486  "serial_number": "00000000000000000000",
00:18:14.486  "model_number": "SPDK bdev Controller",
00:18:14.486  "max_namespaces": 32,
00:18:14.486  "min_cntlid": 1,
00:18:14.486  "max_cntlid": 65519,
00:18:14.486  "namespaces": []
00:18:14.486  }
00:18:14.486  ]
00:18:14.486   19:16:45 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:14.486   19:16:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@174 -- # vm_check_subsys_nqn 0 nqn.2016-06.io.spdk:vfiouser-0
00:18:14.486   19:16:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@89 -- # sleep 1
00:18:14.743  [2024-12-06 19:16:45.459459] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/tmp/vfiouser-0: enabling controller
00:18:15.675    19:16:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:18:15.675    19:16:46 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:18:15.675    19:16:46 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:15.676    19:16:46 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:15.676    19:16:46 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:18:15.676    19:16:46 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:18:15.676     19:16:46 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:18:15.676     19:16:46 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:18:15.676     19:16:46 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:15.676     19:16:46 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:15.676     19:16:46 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:18:15.676     19:16:46 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:18:15.676    19:16:46 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:18:15.676  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:18:15.676   19:16:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # nqn=/sys/class/nvme/nvme0/subsysnqn
00:18:15.676   19:16:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@91 -- # [[ -z /sys/class/nvme/nvme0/subsysnqn ]]
00:18:15.676    19:16:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@177 -- # rpc_cmd nvmf_get_subsystems
00:18:15.676    19:16:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@177 -- # jq -r '. | length'
00:18:15.676    19:16:46 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:15.676    19:16:46 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:15.676    19:16:46 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:15.676   19:16:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@177 -- # [[ 2 -eq 2 ]]
00:18:15.676    19:16:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@179 -- # create_device 1 0
00:18:15.676    19:16:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@179 -- # jq -r .handle
00:18:15.676    19:16:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=1
00:18:15.676    19:16:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:18:15.676    19:16:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:15.933  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:15.933  I0000 00:00:1733509006.744996  583539 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:15.933  I0000 00:00:1733509006.746833  583539 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:15.933  [2024-12-06 19:16:46.752959] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-1' does not exist
00:18:16.192   19:16:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@179 -- # device1=nvme:nqn.2016-06.io.spdk:vfiouser-1
00:18:16.192   19:16:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@180 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:18:16.192   19:16:46 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:16.192   19:16:46 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:16.192  [
00:18:16.192  {
00:18:16.192  "nqn": "nqn.2016-06.io.spdk:vfiouser-0",
00:18:16.192  "subtype": "NVMe",
00:18:16.192  "listen_addresses": [
00:18:16.192  {
00:18:16.192  "trtype": "VFIOUSER",
00:18:16.192  "adrfam": "IPv4",
00:18:16.192  "traddr": "/var/tmp/vfiouser-0",
00:18:16.192  "trsvcid": ""
00:18:16.192  }
00:18:16.192  ],
00:18:16.192  "allow_any_host": true,
00:18:16.192  "hosts": [],
00:18:16.192  "serial_number": "00000000000000000000",
00:18:16.192  "model_number": "SPDK bdev Controller",
00:18:16.192  "max_namespaces": 32,
00:18:16.192  "min_cntlid": 1,
00:18:16.192  "max_cntlid": 65519,
00:18:16.192  "namespaces": []
00:18:16.192  }
00:18:16.192  ]
00:18:16.192   19:16:46 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:16.192   19:16:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@181 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:18:16.192   19:16:46 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:16.192   19:16:46 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:16.192  [
00:18:16.192  {
00:18:16.192  "nqn": "nqn.2016-06.io.spdk:vfiouser-1",
00:18:16.192  "subtype": "NVMe",
00:18:16.192  "listen_addresses": [
00:18:16.192  {
00:18:16.192  "trtype": "VFIOUSER",
00:18:16.192  "adrfam": "IPv4",
00:18:16.192  "traddr": "/var/tmp/vfiouser-1",
00:18:16.192  "trsvcid": ""
00:18:16.192  }
00:18:16.192  ],
00:18:16.192  "allow_any_host": true,
00:18:16.192  "hosts": [],
00:18:16.192  "serial_number": "00000000000000000000",
00:18:16.192  "model_number": "SPDK bdev Controller",
00:18:16.192  "max_namespaces": 32,
00:18:16.192  "min_cntlid": 1,
00:18:16.192  "max_cntlid": 65519,
00:18:16.192  "namespaces": []
00:18:16.192  }
00:18:16.192  ]
00:18:16.192   19:16:46 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:16.192   19:16:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@182 -- # [[ nvme:nqn.2016-06.io.spdk:vfiouser-0 != \n\v\m\e\:\n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\v\f\i\o\u\s\e\r\-\1 ]]
00:18:16.192   19:16:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@183 -- # vm_check_subsys_nqn 0 nqn.2016-06.io.spdk:vfiouser-1
00:18:16.192   19:16:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@89 -- # sleep 1
00:18:16.192  [2024-12-06 19:16:47.008014] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/tmp/vfiouser-1: enabling controller
00:18:17.124    19:16:47 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:18:17.124    19:16:47 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:18:17.124    19:16:47 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:17.124    19:16:47 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:17.124    19:16:47 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:18:17.124    19:16:47 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:18:17.124     19:16:47 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:18:17.124     19:16:47 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:18:17.124     19:16:47 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:17.124     19:16:47 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:17.124     19:16:47 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:18:17.124     19:16:47 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:18:17.124    19:16:47 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:18:17.124  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:18:17.382   19:16:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # nqn=/sys/class/nvme/nvme1/subsysnqn
00:18:17.382   19:16:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@91 -- # [[ -z /sys/class/nvme/nvme1/subsysnqn ]]
00:18:17.382    19:16:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@186 -- # rpc_cmd nvmf_get_subsystems
00:18:17.382    19:16:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@186 -- # jq -r '. | length'
00:18:17.382    19:16:48 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:17.382    19:16:48 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:17.382    19:16:48 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:17.382   19:16:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@186 -- # [[ 3 -eq 3 ]]
00:18:17.382    19:16:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@190 -- # create_device 0 0
00:18:17.382    19:16:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@190 -- # jq -r .handle
00:18:17.382    19:16:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=0
00:18:17.382    19:16:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:18:17.382    19:16:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:17.641  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:17.641  I0000 00:00:1733509008.423361  583718 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:17.641  I0000 00:00:1733509008.425231  583718 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:17.641   19:16:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@190 -- # tmp0=nvme:nqn.2016-06.io.spdk:vfiouser-0
00:18:17.641    19:16:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@191 -- # create_device 1 0
00:18:17.641    19:16:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@191 -- # jq -r .handle
00:18:17.641    19:16:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=1
00:18:17.641    19:16:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:18:17.641    19:16:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:17.899  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:17.899  I0000 00:00:1733509008.738857  583861 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:17.899  I0000 00:00:1733509008.740799  583861 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:17.899   19:16:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@191 -- # tmp1=nvme:nqn.2016-06.io.spdk:vfiouser-1
00:18:17.899    19:16:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@193 -- # vm_count_nvme 0
00:18:17.899    19:16:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@68 -- # vm_exec 0 'grep -sl SPDK /sys/class/nvme/*/model || true'
00:18:17.899    19:16:48 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:18:17.899    19:16:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@68 -- # wc -l
00:18:17.899    19:16:48 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:17.899    19:16:48 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:17.899    19:16:48 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:18:17.899    19:16:48 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:18:17.899     19:16:48 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:18:17.899     19:16:48 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:18:17.899     19:16:48 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:17.899     19:16:48 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:17.899     19:16:48 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:18:17.899     19:16:48 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:18:17.899    19:16:48 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -sl SPDK /sys/class/nvme/*/model || true'
00:18:17.899  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:18:18.157   19:16:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@193 -- # [[ 2 -eq 2 ]]
00:18:18.157    19:16:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@195 -- # rpc_cmd nvmf_get_subsystems
00:18:18.157    19:16:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@195 -- # jq -r '. | length'
00:18:18.157    19:16:48 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:18.157    19:16:48 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:18.157    19:16:48 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:18.157   19:16:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@195 -- # [[ 3 -eq 3 ]]
00:18:18.157   19:16:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@196 -- # [[ nvme:nqn.2016-06.io.spdk:vfiouser-0 == \n\v\m\e\:\n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\v\f\i\o\u\s\e\r\-\0 ]]
00:18:18.157   19:16:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@197 -- # [[ nvme:nqn.2016-06.io.spdk:vfiouser-1 == \n\v\m\e\:\n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\v\f\i\o\u\s\e\r\-\1 ]]
00:18:18.157   19:16:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@200 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-0
00:18:18.157   19:16:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:18.416  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:18.416  I0000 00:00:1733509009.195677  583897 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:18.416  I0000 00:00:1733509009.197586  583897 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:18.416  {}
00:18:18.416   19:16:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@201 -- # NOT rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:18:18.416   19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:18:18.416   19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:18:18.416   19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:18:18.416   19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:18.416    19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:18:18.416   19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:18.416   19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:18:18.416   19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:18.416   19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:18.416  [2024-12-06 19:16:49.241184] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-0' does not exist
00:18:18.416  request:
00:18:18.416  {
00:18:18.416  "nqn": "nqn.2016-06.io.spdk:vfiouser-0",
00:18:18.416  "method": "nvmf_get_subsystems",
00:18:18.416  "req_id": 1
00:18:18.416  }
00:18:18.416  Got JSON-RPC error response
00:18:18.416  response:
00:18:18.416  {
00:18:18.416  "code": -19,
00:18:18.416  "message": "No such device"
00:18:18.416  }
00:18:18.416   19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:18:18.416   19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:18:18.416   19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:18.416   19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:18.416   19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:18.416   19:16:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@202 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:18:18.416   19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:18.416   19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:18.416  [
00:18:18.416  {
00:18:18.416  "nqn": "nqn.2016-06.io.spdk:vfiouser-1",
00:18:18.416  "subtype": "NVMe",
00:18:18.416  "listen_addresses": [
00:18:18.416  {
00:18:18.416  "trtype": "VFIOUSER",
00:18:18.416  "adrfam": "IPv4",
00:18:18.416  "traddr": "/var/tmp/vfiouser-1",
00:18:18.416  "trsvcid": ""
00:18:18.416  }
00:18:18.416  ],
00:18:18.416  "allow_any_host": true,
00:18:18.416  "hosts": [],
00:18:18.416  "serial_number": "00000000000000000000",
00:18:18.416  "model_number": "SPDK bdev Controller",
00:18:18.416  "max_namespaces": 32,
00:18:18.416  "min_cntlid": 1,
00:18:18.416  "max_cntlid": 65519,
00:18:18.416  "namespaces": []
00:18:18.416  }
00:18:18.416  ]
00:18:18.416   19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:18.416    19:16:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@203 -- # rpc_cmd nvmf_get_subsystems
00:18:18.416    19:16:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@203 -- # jq -r '. | length'
00:18:18.416    19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:18.416    19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:18.416    19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:18.416   19:16:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@203 -- # [[ 2 -eq 2 ]]
00:18:18.416    19:16:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@204 -- # vm_count_nvme 0
00:18:18.416    19:16:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@68 -- # vm_exec 0 'grep -sl SPDK /sys/class/nvme/*/model || true'
00:18:18.416    19:16:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@68 -- # wc -l
00:18:18.416    19:16:49 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:18:18.416    19:16:49 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:18.416    19:16:49 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:18.416    19:16:49 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:18:18.416    19:16:49 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:18:18.416     19:16:49 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:18:18.416     19:16:49 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:18:18.416     19:16:49 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:18.416     19:16:49 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:18.416     19:16:49 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:18:18.416     19:16:49 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:18:18.416    19:16:49 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -sl SPDK /sys/class/nvme/*/model || true'
00:18:18.416  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:18:18.674   19:16:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@204 -- # [[ 1 -eq 1 ]]
00:18:18.674   19:16:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@206 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-1
00:18:18.674   19:16:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:18.933  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:18.933  I0000 00:00:1733509009.688330  584051 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:18.933  I0000 00:00:1733509009.690118  584051 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:18.933  {}
00:18:18.933   19:16:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@207 -- # NOT rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:18:18.933   19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:18:18.933   19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:18:18.933   19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:18:18.933   19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:18.933    19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:18:18.933   19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:18.933   19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:18:18.933   19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:18.933   19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:18.933  [2024-12-06 19:16:49.738713] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-0' does not exist
00:18:18.933  request:
00:18:18.933  {
00:18:18.933  "nqn": "nqn.2016-06.io.spdk:vfiouser-0",
00:18:18.933  "method": "nvmf_get_subsystems",
00:18:18.933  "req_id": 1
00:18:18.933  }
00:18:18.933  Got JSON-RPC error response
00:18:18.933  response:
00:18:18.933  {
00:18:18.933  "code": -19,
00:18:18.933  "message": "No such device"
00:18:18.933  }
00:18:18.933   19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:18:18.933   19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:18:18.933   19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:18.933   19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:18.933   19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:18.933   19:16:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@208 -- # NOT rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:18:18.933   19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:18:18.933   19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:18:18.933   19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:18:18.933   19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:18.933    19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:18:18.933   19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:18.933   19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:18:18.933   19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:18.933   19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:18.933  [2024-12-06 19:16:49.750783] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-1' does not exist
00:18:18.933  request:
00:18:18.933  {
00:18:18.933  "nqn": "nqn.2016-06.io.spdk:vfiouser-1",
00:18:18.933  "method": "nvmf_get_subsystems",
00:18:18.933  "req_id": 1
00:18:18.933  }
00:18:18.933  Got JSON-RPC error response
00:18:18.933  response:
00:18:18.933  {
00:18:18.933  "code": -19,
00:18:18.933  "message": "No such device"
00:18:18.933  }
00:18:18.933   19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:18:18.933   19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:18:18.933   19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:18.933   19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:18.933   19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:18.933    19:16:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@209 -- # rpc_cmd nvmf_get_subsystems
00:18:18.933    19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:18.933    19:16:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@209 -- # jq -r '. | length'
00:18:18.933    19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:18.933    19:16:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:18.933   19:16:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@209 -- # [[ 1 -eq 1 ]]
00:18:18.933    19:16:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@210 -- # vm_count_nvme 0
00:18:18.933    19:16:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@68 -- # vm_exec 0 'grep -sl SPDK /sys/class/nvme/*/model || true'
00:18:18.933    19:16:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@68 -- # wc -l
00:18:18.933    19:16:49 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:18:18.933    19:16:49 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:18.933    19:16:49 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:18.933    19:16:49 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:18:18.933    19:16:49 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:18:18.933     19:16:49 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:18:18.933     19:16:49 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:18:18.933     19:16:49 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:18.933     19:16:49 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:18.933     19:16:49 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:18:18.933     19:16:49 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:18:18.933    19:16:49 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -sl SPDK /sys/class/nvme/*/model || true'
00:18:18.933  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:18:19.191   19:16:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@210 -- # [[ 0 -eq 0 ]]
00:18:19.191   19:16:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@213 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-0
00:18:19.191   19:16:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:19.449  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:19.449  I0000 00:00:1733509010.184395  584095 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:19.449  I0000 00:00:1733509010.186424  584095 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:19.449  [2024-12-06 19:16:50.192230] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-0' does not exist
00:18:19.449  {}
00:18:19.449   19:16:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@214 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-1
00:18:19.449   19:16:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:19.707  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:19.707  I0000 00:00:1733509010.467391  584116 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:19.707  I0000 00:00:1733509010.469322  584116 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:19.707  [2024-12-06 19:16:50.472982] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-1' does not exist
00:18:19.707  {}
00:18:19.707    19:16:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@217 -- # create_device 0 0
00:18:19.707    19:16:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=0
00:18:19.707    19:16:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@217 -- # jq -r .handle
00:18:19.707    19:16:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:18:19.707    19:16:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:19.964  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:19.964  I0000 00:00:1733509010.734662  584267 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:19.964  I0000 00:00:1733509010.736440  584267 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:19.964  [2024-12-06 19:16:50.741699] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-0' does not exist
00:18:19.964   19:16:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@217 -- # device0=nvme:nqn.2016-06.io.spdk:vfiouser-0
00:18:19.964    19:16:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@218 -- # create_device 1 0
00:18:19.964    19:16:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@218 -- # jq -r .handle
00:18:19.964    19:16:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=1
00:18:19.964    19:16:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:18:19.964    19:16:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:20.229  [2024-12-06 19:16:51.001635] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/tmp/vfiouser-0: enabling controller
00:18:20.229  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:20.229  I0000 00:00:1733509011.129685  584294 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:20.229  I0000 00:00:1733509011.131399  584294 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:20.229  [2024-12-06 19:16:51.135029] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-1' does not exist
00:18:20.490   19:16:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@218 -- # device1=nvme:nqn.2016-06.io.spdk:vfiouser-1
00:18:20.490    19:16:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@219 -- # rpc_cmd bdev_get_bdevs -b null0
00:18:20.490    19:16:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@219 -- # jq -r '.[].uuid'
00:18:20.490    19:16:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:20.490    19:16:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:20.490    19:16:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:20.490   19:16:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@219 -- # uuid0=1500d3d9-ce39-4076-8fba-046cb258f9c1
00:18:20.490    19:16:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@220 -- # rpc_cmd bdev_get_bdevs -b null1
00:18:20.490    19:16:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:20.490    19:16:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@220 -- # jq -r '.[].uuid'
00:18:20.490    19:16:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:20.490    19:16:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:20.490   19:16:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@220 -- # uuid1=b40734f8-814c-41a6-b636-835d0ba1e204
00:18:20.490   19:16:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@223 -- # attach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 1500d3d9-ce39-4076-8fba-046cb258f9c1
00:18:20.490   19:16:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:20.490    19:16:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # uuid2base64 1500d3d9-ce39-4076-8fba-046cb258f9c1
00:18:20.490    19:16:51 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:18:20.490  [2024-12-06 19:16:51.404067] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/tmp/vfiouser-1: enabling controller
00:18:20.748  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:20.748  I0000 00:00:1733509011.645302  584323 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:20.748  I0000 00:00:1733509011.647252  584323 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:20.748  {}
00:18:21.005    19:16:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@224 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:18:21.005    19:16:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:21.005    19:16:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@224 -- # jq -r '.[0].namespaces | length'
00:18:21.005    19:16:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:21.005    19:16:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:21.005   19:16:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@224 -- # [[ 1 -eq 1 ]]
00:18:21.005    19:16:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@225 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:18:21.005    19:16:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:21.005    19:16:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@225 -- # jq -r '.[0].namespaces | length'
00:18:21.006    19:16:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:21.006    19:16:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:21.006   19:16:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@225 -- # [[ 0 -eq 0 ]]
00:18:21.006    19:16:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@226 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:18:21.006    19:16:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@226 -- # jq -r '.[0].namespaces[0].uuid'
00:18:21.006    19:16:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:21.006    19:16:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:21.006    19:16:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:21.006   19:16:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@226 -- # [[ 1500d3d9-ce39-4076-8fba-046cb258f9c1 == \1\5\0\0\d\3\d\9\-\c\e\3\9\-\4\0\7\6\-\8\f\b\a\-\0\4\6\c\b\2\5\8\f\9\c\1 ]]
00:18:21.006   19:16:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@227 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 1500d3d9-ce39-4076-8fba-046cb258f9c1
00:18:21.006   19:16:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:18:21.006   19:16:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-0
00:18:21.006   19:16:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=1500d3d9-ce39-4076-8fba-046cb258f9c1
00:18:21.006    19:16:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:18:21.006    19:16:51 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:18:21.006    19:16:51 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:21.006    19:16:51 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:21.006    19:16:51 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:18:21.006    19:16:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:18:21.006    19:16:51 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:18:21.006     19:16:51 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:18:21.006     19:16:51 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:18:21.006     19:16:51 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:21.006     19:16:51 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:21.006     19:16:51 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:18:21.006     19:16:51 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:18:21.006    19:16:51 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:18:21.006  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:18:21.263   19:16:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme0
00:18:21.263   19:16:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme0 ]]
00:18:21.263    19:16:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 1500d3d9-ce39-4076-8fba-046cb258f9c1 /sys/class/nvme/nvme0/nvme*/uuid'
00:18:21.263    19:16:51 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:18:21.263    19:16:51 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:21.263    19:16:51 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:21.263    19:16:51 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:18:21.263    19:16:51 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:18:21.263     19:16:51 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:18:21.263     19:16:51 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:18:21.263     19:16:51 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:21.263     19:16:51 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:21.263     19:16:51 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:18:21.263     19:16:51 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:18:21.263    19:16:51 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 1500d3d9-ce39-4076-8fba-046cb258f9c1 /sys/class/nvme/nvme0/nvme*/uuid'
00:18:21.263  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:18:21.263   19:16:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=/sys/class/nvme/nvme0/nvme0c0n1/uuid
00:18:21.263   19:16:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z /sys/class/nvme/nvme0/nvme0c0n1/uuid ]]
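The `vm_check_subsys_volume` sequence above reduces to two greps in the guest's sysfs: find the controller whose `subsysnqn` matches, then confirm a namespace under it carries the volume UUID. A minimal standalone sketch of that logic (using a throwaway mock sysfs tree, since the real `/sys/class/nvme` only exists inside the VM, and with the path/values below taken from this run for illustration):

```shell
#!/usr/bin/env bash
# Sketch of the vm_check_subsys_volume flow: locate the NVMe controller
# exposing the subsystem NQN, then verify a namespace carries the UUID.
# A mock sysfs tree stands in for the guest's /sys/class/nvme.
set -euo pipefail

mock=$(mktemp -d)
mkdir -p "$mock/nvme0/nvme0c0n1"
echo 'nqn.2016-06.io.spdk:vfiouser-0' > "$mock/nvme0/subsysnqn"
echo '1500d3d9-ce39-4076-8fba-046cb258f9c1' > "$mock/nvme0/nvme0c0n1/uuid"

nqn='nqn.2016-06.io.spdk:vfiouser-0'
uuid='1500d3d9-ce39-4076-8fba-046cb258f9c1'

# Step 1: which controller exposes the subsystem NQN? (The real script
# takes field 5 of /sys/class/nvme/<ctrl>/subsysnqn; the mock tree is
# shallower, so we take the second-to-last path component instead.)
nvme=$(grep -l "$nqn" "$mock"/*/subsysnqn | awk -F/ '{print $(NF-1)}')
[[ -n $nvme ]] || exit 1

# Step 2: does any namespace of that controller carry the volume UUID?
tmpuuid=$(grep -l "$uuid" "$mock/$nvme"/nvme*/uuid || true)
[[ -n $tmpuuid ]] || exit 1

echo "found $uuid on $nvme"
```

In the log this runs over SSH (`vm_exec` via `sshpass` on the VM's forwarded port 10000), but the check itself is exactly these two greps.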
00:18:21.263   19:16:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@229 -- # attach_volume nvme:nqn.2016-06.io.spdk:vfiouser-1 b40734f8-814c-41a6-b636-835d0ba1e204
00:18:21.263   19:16:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:21.263    19:16:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # uuid2base64 b40734f8-814c-41a6-b636-835d0ba1e204
00:18:21.263    19:16:52 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:18:21.519  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:21.519  I0000 00:00:1733509012.416468  584501 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:21.519  I0000 00:00:1733509012.418449  584501 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:21.519  {}
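Each `attach_volume` call above pipes the bdev UUID through `uuid2base64` (`sma/common.sh@20` shells out to `python`) before handing it to `sma-client.py`. A plausible sketch of that helper, assuming (as the SMA protobuf API suggests) that the `volume_id` field carries the UUID's raw 16 bytes base64-encoded:

```shell
# Hypothetical reimplementation of the uuid2base64 helper: parse the UUID,
# take its 16-byte big-endian representation, and base64-encode it so it
# can be embedded as a bytes field in the JSON request body.
uuid2base64() {
    python3 -c "
import base64, sys, uuid
print(base64.b64encode(uuid.UUID(sys.argv[1]).bytes).decode())
" "$1"
}

uuid2base64 1500d3d9-ce39-4076-8fba-046cb258f9c1
```

The empty `{}` lines in the log are the SMA server's (successful) JSON responses to these attach requests.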
00:18:21.775    19:16:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@230 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:18:21.775    19:16:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@230 -- # jq -r '.[0].namespaces | length'
00:18:21.775    19:16:52 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:21.775    19:16:52 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:21.775    19:16:52 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:21.775   19:16:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@230 -- # [[ 1 -eq 1 ]]
00:18:21.775    19:16:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@231 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:18:21.775    19:16:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@231 -- # jq -r '.[0].namespaces | length'
00:18:21.775    19:16:52 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:21.775    19:16:52 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:21.775    19:16:52 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:21.775   19:16:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@231 -- # [[ 1 -eq 1 ]]
00:18:21.775    19:16:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@232 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:18:21.775    19:16:52 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:21.775    19:16:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@232 -- # jq -r '.[0].namespaces[0].uuid'
00:18:21.775    19:16:52 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:21.775    19:16:52 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:21.775   19:16:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@232 -- # [[ 1500d3d9-ce39-4076-8fba-046cb258f9c1 == \1\5\0\0\d\3\d\9\-\c\e\3\9\-\4\0\7\6\-\8\f\b\a\-\0\4\6\c\b\2\5\8\f\9\c\1 ]]
00:18:21.775    19:16:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@233 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:18:21.775    19:16:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@233 -- # jq -r '.[0].namespaces[0].uuid'
00:18:21.775    19:16:52 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:21.775    19:16:52 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:21.775    19:16:52 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:21.775   19:16:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@233 -- # [[ b40734f8-814c-41a6-b636-835d0ba1e204 == \b\4\0\7\3\4\f\8\-\8\1\4\c\-\4\1\a\6\-\b\6\3\6\-\8\3\5\d\0\b\a\1\e\2\0\4 ]]
00:18:21.775   19:16:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@234 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 b40734f8-814c-41a6-b636-835d0ba1e204
00:18:21.775   19:16:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:18:21.775   19:16:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-1
00:18:21.775   19:16:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=b40734f8-814c-41a6-b636-835d0ba1e204
00:18:21.775    19:16:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:18:21.775    19:16:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:18:21.775    19:16:52 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:18:21.775    19:16:52 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:21.775    19:16:52 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:21.775    19:16:52 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:18:21.775    19:16:52 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:18:21.775     19:16:52 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:18:21.775     19:16:52 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:18:21.775     19:16:52 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:21.775     19:16:52 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:21.775     19:16:52 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:18:21.775     19:16:52 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:18:21.775    19:16:52 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:18:21.775  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:18:22.032   19:16:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme1
00:18:22.032   19:16:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme1 ]]
00:18:22.032    19:16:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l b40734f8-814c-41a6-b636-835d0ba1e204 /sys/class/nvme/nvme1/nvme*/uuid'
00:18:22.032    19:16:52 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:18:22.032    19:16:52 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:22.032    19:16:52 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:22.032    19:16:52 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:18:22.032    19:16:52 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:18:22.032     19:16:52 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:18:22.033     19:16:52 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:18:22.033     19:16:52 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:22.033     19:16:52 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:22.033     19:16:52 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:18:22.033     19:16:52 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:18:22.033    19:16:52 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l b40734f8-814c-41a6-b636-835d0ba1e204 /sys/class/nvme/nvme1/nvme*/uuid'
00:18:22.033  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:18:22.033   19:16:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=/sys/class/nvme/nvme1/nvme1c1n1/uuid
00:18:22.033   19:16:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z /sys/class/nvme/nvme1/nvme1c1n1/uuid ]]
00:18:22.033   19:16:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@237 -- # attach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 1500d3d9-ce39-4076-8fba-046cb258f9c1
00:18:22.033   19:16:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:22.033    19:16:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # uuid2base64 1500d3d9-ce39-4076-8fba-046cb258f9c1
00:18:22.033    19:16:52 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:18:22.289  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:22.289  I0000 00:00:1733509013.209377  584681 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:22.289  I0000 00:00:1733509013.211324  584681 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:22.547  {}
00:18:22.547   19:16:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@238 -- # attach_volume nvme:nqn.2016-06.io.spdk:vfiouser-1 b40734f8-814c-41a6-b636-835d0ba1e204
00:18:22.547   19:16:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:22.547    19:16:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # uuid2base64 b40734f8-814c-41a6-b636-835d0ba1e204
00:18:22.547    19:16:53 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:18:22.805  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:22.805  I0000 00:00:1733509013.549928  584705 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:22.805  I0000 00:00:1733509013.551958  584705 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:22.805  {}
00:18:22.805    19:16:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@239 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:18:22.805    19:16:53 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:22.805    19:16:53 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:22.805    19:16:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@239 -- # jq -r '.[0].namespaces | length'
00:18:22.805    19:16:53 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:22.805   19:16:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@239 -- # [[ 1 -eq 1 ]]
00:18:22.805    19:16:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@240 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:18:22.805    19:16:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@240 -- # jq -r '.[0].namespaces | length'
00:18:22.805    19:16:53 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:22.805    19:16:53 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:22.805    19:16:53 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:22.805   19:16:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@240 -- # [[ 1 -eq 1 ]]
00:18:22.805    19:16:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@241 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:18:22.805    19:16:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@241 -- # jq -r '.[0].namespaces[0].uuid'
00:18:22.805    19:16:53 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:22.805    19:16:53 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:22.805    19:16:53 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:22.805   19:16:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@241 -- # [[ 1500d3d9-ce39-4076-8fba-046cb258f9c1 == \1\5\0\0\d\3\d\9\-\c\e\3\9\-\4\0\7\6\-\8\f\b\a\-\0\4\6\c\b\2\5\8\f\9\c\1 ]]
00:18:22.805    19:16:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@242 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:18:22.805    19:16:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@242 -- # jq -r '.[0].namespaces[0].uuid'
00:18:22.805    19:16:53 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:22.805    19:16:53 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:22.805    19:16:53 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:23.062   19:16:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@242 -- # [[ b40734f8-814c-41a6-b636-835d0ba1e204 == \b\4\0\7\3\4\f\8\-\8\1\4\c\-\4\1\a\6\-\b\6\3\6\-\8\3\5\d\0\b\a\1\e\2\0\4 ]]
00:18:23.062   19:16:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@243 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 1500d3d9-ce39-4076-8fba-046cb258f9c1
00:18:23.062   19:16:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:18:23.062   19:16:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-0
00:18:23.062   19:16:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=1500d3d9-ce39-4076-8fba-046cb258f9c1
00:18:23.062    19:16:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:18:23.062    19:16:53 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:18:23.062    19:16:53 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:23.062    19:16:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:18:23.062    19:16:53 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:23.062    19:16:53 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:18:23.062    19:16:53 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:18:23.062     19:16:53 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:18:23.062     19:16:53 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:18:23.062     19:16:53 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:23.062     19:16:53 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:23.062     19:16:53 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:18:23.062     19:16:53 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:18:23.062    19:16:53 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:18:23.062  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:18:23.062   19:16:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme0
00:18:23.062   19:16:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme0 ]]
00:18:23.062    19:16:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 1500d3d9-ce39-4076-8fba-046cb258f9c1 /sys/class/nvme/nvme0/nvme*/uuid'
00:18:23.062    19:16:53 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:18:23.062    19:16:53 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:23.062    19:16:53 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:23.062    19:16:53 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:18:23.062    19:16:53 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:18:23.062     19:16:53 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:18:23.062     19:16:53 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:18:23.062     19:16:53 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:23.062     19:16:53 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:23.062     19:16:53 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:18:23.062     19:16:53 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:18:23.062    19:16:53 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 1500d3d9-ce39-4076-8fba-046cb258f9c1 /sys/class/nvme/nvme0/nvme*/uuid'
00:18:23.062  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:18:23.320   19:16:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=/sys/class/nvme/nvme0/nvme0c0n1/uuid
00:18:23.320   19:16:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z /sys/class/nvme/nvme0/nvme0c0n1/uuid ]]
00:18:23.320   19:16:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@244 -- # NOT vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 b40734f8-814c-41a6-b636-835d0ba1e204
00:18:23.320   19:16:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:18:23.320   19:16:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 b40734f8-814c-41a6-b636-835d0ba1e204
00:18:23.320   19:16:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=vm_check_subsys_volume
00:18:23.320   19:16:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:23.320    19:16:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t vm_check_subsys_volume
00:18:23.320   19:16:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:23.320   19:16:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 b40734f8-814c-41a6-b636-835d0ba1e204
00:18:23.320   19:16:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:18:23.320   19:16:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-0
00:18:23.320   19:16:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=b40734f8-814c-41a6-b636-835d0ba1e204
00:18:23.320    19:16:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:18:23.320    19:16:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:18:23.320    19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:18:23.320    19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:23.320    19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:23.320    19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:18:23.320    19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:18:23.320     19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:18:23.320     19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:18:23.320     19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:23.320     19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:23.320     19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:18:23.320     19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:18:23.320    19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:18:23.320  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:18:23.320   19:16:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme0
00:18:23.320   19:16:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme0 ]]
00:18:23.320    19:16:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l b40734f8-814c-41a6-b636-835d0ba1e204 /sys/class/nvme/nvme0/nvme*/uuid'
00:18:23.320    19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:18:23.320    19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:23.320    19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:23.320    19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:18:23.320    19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:18:23.320     19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:18:23.320     19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:18:23.320     19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:23.320     19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:23.320     19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:18:23.320     19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:18:23.320    19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l b40734f8-814c-41a6-b636-835d0ba1e204 /sys/class/nvme/nvme0/nvme*/uuid'
00:18:23.320  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:18:23.578   19:16:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=
00:18:23.578   19:16:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z '' ]]
00:18:23.578   19:16:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@84 -- # return 1
00:18:23.578   19:16:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:18:23.578   19:16:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:23.578   19:16:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:23.578   19:16:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
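The `NOT` wrapper traced above (autotest_common.sh: `valid_exec_arg`, `es=...`, `(( es > 128 ))`, `(( !es == 0 ))`) asserts that a check *fails*: here, that vfiouser-1's volume is not visible under the vfiouser-0 controller. A minimal sketch of that inversion pattern, simplified from the traced shell lines:

```shell
# Sketch of the NOT helper: run a command and succeed only if it fails.
# Exit statuses above 128 (killed by a signal) are treated as real errors
# rather than an expected negative result, mirroring the (( es > 128 )) check.
NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return "$es"
    (( es != 0 ))   # return 0 (success) only when the wrapped command failed
}

NOT false && echo "negative check passed"
NOT true  || echo "positive command was not supposed to succeed"
```

This lets the test assert absence (the guest-side grep returning an empty `tmpuuid` and `return 1`) without tripping the suite's own error handling.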
00:18:23.578   19:16:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@245 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 b40734f8-814c-41a6-b636-835d0ba1e204
00:18:23.578   19:16:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:18:23.578   19:16:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-1
00:18:23.578   19:16:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=b40734f8-814c-41a6-b636-835d0ba1e204
00:18:23.578    19:16:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:18:23.578    19:16:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:18:23.578    19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:18:23.578    19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:23.578    19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:23.578    19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:18:23.578    19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:18:23.578     19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:18:23.578     19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:18:23.578     19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:23.578     19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:23.578     19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:18:23.578     19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:18:23.578    19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:18:23.578  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:18:23.578   19:16:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme1
00:18:23.578   19:16:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme1 ]]
00:18:23.836    19:16:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l b40734f8-814c-41a6-b636-835d0ba1e204 /sys/class/nvme/nvme1/nvme*/uuid'
00:18:23.836    19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:18:23.836    19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:23.836    19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:23.836    19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:18:23.837    19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:18:23.837     19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:18:23.837     19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:18:23.837     19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:23.837     19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:23.837     19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:18:23.837     19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:18:23.837    19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l b40734f8-814c-41a6-b636-835d0ba1e204 /sys/class/nvme/nvme1/nvme*/uuid'
00:18:23.837  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:18:23.837   19:16:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=/sys/class/nvme/nvme1/nvme1c1n1/uuid
00:18:23.837   19:16:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z /sys/class/nvme/nvme1/nvme1c1n1/uuid ]]
00:18:23.837   19:16:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@246 -- # NOT vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 1500d3d9-ce39-4076-8fba-046cb258f9c1
00:18:23.837   19:16:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:18:23.837   19:16:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 1500d3d9-ce39-4076-8fba-046cb258f9c1
00:18:23.837   19:16:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=vm_check_subsys_volume
00:18:23.837   19:16:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:23.837    19:16:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t vm_check_subsys_volume
00:18:23.837   19:16:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:23.837   19:16:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 1500d3d9-ce39-4076-8fba-046cb258f9c1
00:18:23.837   19:16:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:18:23.837   19:16:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-1
00:18:23.837   19:16:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=1500d3d9-ce39-4076-8fba-046cb258f9c1
00:18:23.837    19:16:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:18:23.837    19:16:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:18:23.837    19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:18:23.837    19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:23.837    19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:23.837    19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:18:23.837    19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:18:23.837     19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:18:23.837     19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:18:23.837     19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:23.837     19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:23.837     19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:18:23.837     19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:18:23.837    19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:18:23.837  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:18:24.095   19:16:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme1
00:18:24.095   19:16:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme1 ]]
00:18:24.095    19:16:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 1500d3d9-ce39-4076-8fba-046cb258f9c1 /sys/class/nvme/nvme1/nvme*/uuid'
00:18:24.095    19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:18:24.095    19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:24.095    19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:24.095    19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:18:24.095    19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:18:24.095     19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:18:24.095     19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:18:24.095     19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:24.095     19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:24.095     19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:18:24.095     19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:18:24.095    19:16:54 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 1500d3d9-ce39-4076-8fba-046cb258f9c1 /sys/class/nvme/nvme1/nvme*/uuid'
00:18:24.095  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:18:24.095   19:16:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=
00:18:24.095   19:16:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z '' ]]
00:18:24.095   19:16:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@84 -- # return 1
00:18:24.095   19:16:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:18:24.095   19:16:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:24.095   19:16:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:24.095   19:16:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:24.095   19:16:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@249 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 b40734f8-814c-41a6-b636-835d0ba1e204
00:18:24.095   19:16:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:24.095    19:16:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 b40734f8-814c-41a6-b636-835d0ba1e204
00:18:24.095    19:16:54 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:18:24.661  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:24.661  I0000 00:00:1733509015.328183  585013 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:24.661  I0000 00:00:1733509015.330114  585013 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:24.661  {}
00:18:24.661   19:16:55 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@250 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-1 1500d3d9-ce39-4076-8fba-046cb258f9c1
00:18:24.661   19:16:55 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:24.661    19:16:55 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 1500d3d9-ce39-4076-8fba-046cb258f9c1
00:18:24.661    19:16:55 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:18:24.918  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:24.918  I0000 00:00:1733509015.702556  585078 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:24.918  I0000 00:00:1733509015.705023  585078 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:24.918  {}
00:18:24.918    19:16:55 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@251 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:18:24.918    19:16:55 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@251 -- # jq -r '.[0].namespaces | length'
00:18:24.918    19:16:55 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:24.918    19:16:55 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:24.918    19:16:55 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:24.918   19:16:55 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@251 -- # [[ 1 -eq 1 ]]
00:18:24.918    19:16:55 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@252 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:18:24.918    19:16:55 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:24.918    19:16:55 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@252 -- # jq -r '.[0].namespaces | length'
00:18:24.918    19:16:55 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:24.918    19:16:55 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:24.918   19:16:55 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@252 -- # [[ 1 -eq 1 ]]
00:18:24.918    19:16:55 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@253 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:18:24.918    19:16:55 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:24.918    19:16:55 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@253 -- # jq -r '.[0].namespaces[0].uuid'
00:18:24.918    19:16:55 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:25.176    19:16:55 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:25.176   19:16:55 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@253 -- # [[ 1500d3d9-ce39-4076-8fba-046cb258f9c1 == \1\5\0\0\d\3\d\9\-\c\e\3\9\-\4\0\7\6\-\8\f\b\a\-\0\4\6\c\b\2\5\8\f\9\c\1 ]]
00:18:25.176    19:16:55 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@254 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:18:25.176    19:16:55 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@254 -- # jq -r '.[0].namespaces[0].uuid'
00:18:25.176    19:16:55 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:25.176    19:16:55 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:25.176    19:16:55 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:25.176   19:16:55 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@254 -- # [[ b40734f8-814c-41a6-b636-835d0ba1e204 == \b\4\0\7\3\4\f\8\-\8\1\4\c\-\4\1\a\6\-\b\6\3\6\-\8\3\5\d\0\b\a\1\e\2\0\4 ]]
00:18:25.176   19:16:55 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@255 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 1500d3d9-ce39-4076-8fba-046cb258f9c1
00:18:25.176   19:16:55 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:18:25.176   19:16:55 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-0
00:18:25.176   19:16:55 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=1500d3d9-ce39-4076-8fba-046cb258f9c1
00:18:25.176    19:16:55 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:18:25.176    19:16:55 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:18:25.176    19:16:55 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:18:25.176    19:16:55 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:25.176    19:16:55 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:25.176    19:16:55 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:18:25.176    19:16:55 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:18:25.176     19:16:55 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:18:25.176     19:16:55 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:18:25.176     19:16:55 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:25.176     19:16:55 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:25.176     19:16:55 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:18:25.177     19:16:55 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:18:25.177    19:16:55 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:18:25.177  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:18:25.177   19:16:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme0
00:18:25.177   19:16:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme0 ]]
00:18:25.177    19:16:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 1500d3d9-ce39-4076-8fba-046cb258f9c1 /sys/class/nvme/nvme0/nvme*/uuid'
00:18:25.177    19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:18:25.177    19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:25.177    19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:25.177    19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:18:25.177    19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:18:25.177     19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:18:25.177     19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:18:25.177     19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:25.177     19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:25.177     19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:18:25.177     19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:18:25.177    19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 1500d3d9-ce39-4076-8fba-046cb258f9c1 /sys/class/nvme/nvme0/nvme*/uuid'
00:18:25.434  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:18:25.434   19:16:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=/sys/class/nvme/nvme0/nvme0c0n1/uuid
00:18:25.434   19:16:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z /sys/class/nvme/nvme0/nvme0c0n1/uuid ]]
00:18:25.435   19:16:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@256 -- # NOT vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 b40734f8-814c-41a6-b636-835d0ba1e204
00:18:25.435   19:16:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:18:25.435   19:16:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 b40734f8-814c-41a6-b636-835d0ba1e204
00:18:25.435   19:16:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=vm_check_subsys_volume
00:18:25.435   19:16:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:25.435    19:16:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t vm_check_subsys_volume
00:18:25.435   19:16:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:25.435   19:16:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 b40734f8-814c-41a6-b636-835d0ba1e204
00:18:25.435   19:16:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:18:25.435   19:16:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-0
00:18:25.435   19:16:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=b40734f8-814c-41a6-b636-835d0ba1e204
00:18:25.435    19:16:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:18:25.435    19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:18:25.435    19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:25.435    19:16:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:18:25.435    19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:25.435    19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:18:25.435    19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:18:25.435     19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:18:25.435     19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:18:25.435     19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:25.435     19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:25.435     19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:18:25.435     19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:18:25.435    19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:18:25.435  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:18:25.694   19:16:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme0
00:18:25.694   19:16:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme0 ]]
00:18:25.694    19:16:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l b40734f8-814c-41a6-b636-835d0ba1e204 /sys/class/nvme/nvme0/nvme*/uuid'
00:18:25.694    19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:18:25.694    19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:25.694    19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:25.694    19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:18:25.694    19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:18:25.694     19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:18:25.694     19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:18:25.694     19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:25.694     19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:25.694     19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:18:25.694     19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:18:25.694    19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l b40734f8-814c-41a6-b636-835d0ba1e204 /sys/class/nvme/nvme0/nvme*/uuid'
00:18:25.694  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:18:25.694   19:16:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=
00:18:25.694   19:16:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z '' ]]
00:18:25.694   19:16:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@84 -- # return 1
00:18:25.694   19:16:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:18:25.694   19:16:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:25.694   19:16:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:25.694   19:16:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:25.694   19:16:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@257 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 b40734f8-814c-41a6-b636-835d0ba1e204
00:18:25.694   19:16:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:18:25.694   19:16:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-1
00:18:25.694   19:16:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=b40734f8-814c-41a6-b636-835d0ba1e204
00:18:25.694    19:16:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:18:25.694    19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:18:25.694    19:16:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:18:25.694    19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:25.694    19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:25.694    19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:18:25.694    19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:18:25.694     19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:18:25.694     19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:18:25.694     19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:25.694     19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:25.694     19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:18:25.694     19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:18:25.694    19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:18:25.694  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:18:25.953   19:16:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme1
00:18:25.953   19:16:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme1 ]]
00:18:25.953    19:16:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l b40734f8-814c-41a6-b636-835d0ba1e204 /sys/class/nvme/nvme1/nvme*/uuid'
00:18:25.953    19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:18:25.953    19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:25.953    19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:25.953    19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:18:25.953    19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:18:25.953     19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:18:25.953     19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:18:25.953     19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:25.953     19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:25.953     19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:18:25.953     19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:18:25.953    19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l b40734f8-814c-41a6-b636-835d0ba1e204 /sys/class/nvme/nvme1/nvme*/uuid'
00:18:25.953  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:18:25.953   19:16:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=/sys/class/nvme/nvme1/nvme1c1n1/uuid
00:18:25.953   19:16:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z /sys/class/nvme/nvme1/nvme1c1n1/uuid ]]
00:18:25.953   19:16:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@258 -- # NOT vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 1500d3d9-ce39-4076-8fba-046cb258f9c1
00:18:25.953   19:16:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:18:25.953   19:16:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 1500d3d9-ce39-4076-8fba-046cb258f9c1
00:18:25.953   19:16:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=vm_check_subsys_volume
00:18:25.953   19:16:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:25.953    19:16:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t vm_check_subsys_volume
00:18:25.953   19:16:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:25.953   19:16:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 1500d3d9-ce39-4076-8fba-046cb258f9c1
00:18:25.953   19:16:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:18:25.953   19:16:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-1
00:18:25.953   19:16:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=1500d3d9-ce39-4076-8fba-046cb258f9c1
00:18:25.953    19:16:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:18:25.953    19:16:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:18:25.953    19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:18:25.953    19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:25.953    19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:25.953    19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:18:25.953    19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:18:25.953     19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:18:25.953     19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:18:25.953     19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:25.953     19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:25.953     19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:18:25.953     19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:18:25.953    19:16:56 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:18:25.953  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:18:26.211   19:16:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme1
00:18:26.211   19:16:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme1 ]]
00:18:26.211    19:16:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 1500d3d9-ce39-4076-8fba-046cb258f9c1 /sys/class/nvme/nvme1/nvme*/uuid'
00:18:26.211    19:16:57 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:18:26.211    19:16:57 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:26.211    19:16:57 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:26.211    19:16:57 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:18:26.211    19:16:57 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:18:26.211     19:16:57 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:18:26.211     19:16:57 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:18:26.211     19:16:57 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:26.211     19:16:57 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:26.211     19:16:57 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:18:26.211     19:16:57 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:18:26.211    19:16:57 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 1500d3d9-ce39-4076-8fba-046cb258f9c1 /sys/class/nvme/nvme1/nvme*/uuid'
00:18:26.211  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:18:26.469   19:16:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=
00:18:26.469   19:16:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z '' ]]
00:18:26.469   19:16:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@84 -- # return 1
00:18:26.469   19:16:57 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:18:26.469   19:16:57 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:26.469   19:16:57 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:26.469   19:16:57 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:26.469   19:16:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@261 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 1500d3d9-ce39-4076-8fba-046cb258f9c1
00:18:26.469   19:16:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:26.469    19:16:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 1500d3d9-ce39-4076-8fba-046cb258f9c1
00:18:26.469    19:16:57 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:18:26.726  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:26.726  I0000 00:00:1733509017.440971  585309 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:26.726  I0000 00:00:1733509017.442923  585309 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:26.726  {}
00:18:26.726   19:16:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@262 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-1 b40734f8-814c-41a6-b636-835d0ba1e204
00:18:26.726   19:16:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:26.726    19:16:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 b40734f8-814c-41a6-b636-835d0ba1e204
00:18:26.726    19:16:57 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:18:26.984  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:26.985  I0000 00:00:1733509017.772763  585455 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:26.985  I0000 00:00:1733509017.774494  585455 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:26.985  {}
00:18:26.985    19:16:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@263 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:18:26.985    19:16:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@263 -- # jq -r '.[0].namespaces | length'
00:18:26.985    19:16:57 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:26.985    19:16:57 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:26.985    19:16:57 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:26.985   19:16:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@263 -- # [[ 0 -eq 0 ]]
00:18:26.985    19:16:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@264 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:18:26.985    19:16:57 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:26.985    19:16:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@264 -- # jq -r '.[0].namespaces | length'
00:18:26.985    19:16:57 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:26.985    19:16:57 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:26.985   19:16:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@264 -- # [[ 0 -eq 0 ]]
00:18:26.985   19:16:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@265 -- # NOT vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 1500d3d9-ce39-4076-8fba-046cb258f9c1
00:18:26.985   19:16:57 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:18:26.985   19:16:57 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 1500d3d9-ce39-4076-8fba-046cb258f9c1
00:18:26.985   19:16:57 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=vm_check_subsys_volume
00:18:26.985   19:16:57 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:26.985    19:16:57 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t vm_check_subsys_volume
00:18:26.985   19:16:57 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:26.985   19:16:57 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 1500d3d9-ce39-4076-8fba-046cb258f9c1
00:18:26.985   19:16:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:18:26.985   19:16:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-0
00:18:26.985   19:16:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=1500d3d9-ce39-4076-8fba-046cb258f9c1
00:18:26.985    19:16:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:18:26.985    19:16:57 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:18:26.985    19:16:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:18:26.985    19:16:57 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:26.985    19:16:57 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:26.985    19:16:57 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:18:26.985    19:16:57 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:18:26.985     19:16:57 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:18:26.985     19:16:57 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:18:26.985     19:16:57 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:26.985     19:16:57 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:26.985     19:16:57 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:18:26.985     19:16:57 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:18:26.985    19:16:57 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:18:27.243  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:18:27.243   19:16:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme0
00:18:27.243   19:16:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme0 ]]
00:18:27.243    19:16:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 1500d3d9-ce39-4076-8fba-046cb258f9c1 /sys/class/nvme/nvme0/nvme*/uuid'
00:18:27.243    19:16:58 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:18:27.243    19:16:58 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:27.243    19:16:58 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:27.243    19:16:58 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:18:27.243    19:16:58 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:18:27.243     19:16:58 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:18:27.243     19:16:58 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:18:27.243     19:16:58 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:27.243     19:16:58 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:27.243     19:16:58 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:18:27.243     19:16:58 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:18:27.243    19:16:58 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 1500d3d9-ce39-4076-8fba-046cb258f9c1 /sys/class/nvme/nvme0/nvme*/uuid'
00:18:27.243  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:18:27.502  grep: /sys/class/nvme/nvme0/nvme*/uuid: No such file or directory
00:18:27.502   19:16:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=
00:18:27.502   19:16:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z '' ]]
00:18:27.502   19:16:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@84 -- # return 1
00:18:27.502   19:16:58 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:18:27.502   19:16:58 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:27.502   19:16:58 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:27.502   19:16:58 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:27.502   19:16:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@266 -- # NOT vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 b40734f8-814c-41a6-b636-835d0ba1e204
00:18:27.502   19:16:58 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:18:27.502   19:16:58 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 b40734f8-814c-41a6-b636-835d0ba1e204
00:18:27.502   19:16:58 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=vm_check_subsys_volume
00:18:27.502   19:16:58 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:27.502    19:16:58 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t vm_check_subsys_volume
00:18:27.502   19:16:58 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:27.502   19:16:58 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 b40734f8-814c-41a6-b636-835d0ba1e204
00:18:27.502   19:16:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:18:27.502   19:16:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-1
00:18:27.502   19:16:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=b40734f8-814c-41a6-b636-835d0ba1e204
00:18:27.502    19:16:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:18:27.502    19:16:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:18:27.502    19:16:58 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:18:27.502    19:16:58 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:27.502    19:16:58 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:27.502    19:16:58 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:18:27.502    19:16:58 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:18:27.502     19:16:58 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:18:27.502     19:16:58 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:18:27.502     19:16:58 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:27.502     19:16:58 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:27.502     19:16:58 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:18:27.502     19:16:58 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:18:27.502    19:16:58 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:18:27.502  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:18:27.502   19:16:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme1
00:18:27.502   19:16:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme1 ]]
00:18:27.502    19:16:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l b40734f8-814c-41a6-b636-835d0ba1e204 /sys/class/nvme/nvme1/nvme*/uuid'
00:18:27.502    19:16:58 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:18:27.502    19:16:58 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:27.502    19:16:58 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:27.502    19:16:58 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:18:27.502    19:16:58 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:18:27.502     19:16:58 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:18:27.502     19:16:58 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:18:27.502     19:16:58 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:27.502     19:16:58 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:27.502     19:16:58 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:18:27.502     19:16:58 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:18:27.502    19:16:58 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l b40734f8-814c-41a6-b636-835d0ba1e204 /sys/class/nvme/nvme1/nvme*/uuid'
00:18:27.502  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:18:27.761  grep: /sys/class/nvme/nvme1/nvme*/uuid: No such file or directory
00:18:27.761   19:16:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=
00:18:27.761   19:16:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z '' ]]
00:18:27.761   19:16:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@84 -- # return 1
00:18:27.761   19:16:58 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:18:27.761   19:16:58 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:27.761   19:16:58 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:27.761   19:16:58 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:27.761   19:16:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@269 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 1500d3d9-ce39-4076-8fba-046cb258f9c1
00:18:27.761   19:16:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:27.761    19:16:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 1500d3d9-ce39-4076-8fba-046cb258f9c1
00:18:27.761    19:16:58 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:18:28.020  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:28.020  I0000 00:00:1733509018.818056  585643 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:28.020  I0000 00:00:1733509018.819862  585643 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:28.020  {}
00:18:28.020   19:16:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@270 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-1 b40734f8-814c-41a6-b636-835d0ba1e204
00:18:28.020   19:16:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:28.020    19:16:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 b40734f8-814c-41a6-b636-835d0ba1e204
00:18:28.020    19:16:58 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:18:28.278  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:28.278  I0000 00:00:1733509019.141101  585673 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:28.278  I0000 00:00:1733509019.142959  585673 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:28.278  {}
00:18:28.278   19:16:59 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@271 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 b40734f8-814c-41a6-b636-835d0ba1e204
00:18:28.278   19:16:59 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:28.278    19:16:59 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 b40734f8-814c-41a6-b636-835d0ba1e204
00:18:28.278    19:16:59 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:18:28.536  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:28.536  I0000 00:00:1733509019.459359  585697 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:28.536  I0000 00:00:1733509019.461333  585697 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:28.795  {}
00:18:28.796   19:16:59 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@272 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-1 1500d3d9-ce39-4076-8fba-046cb258f9c1
00:18:28.796   19:16:59 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:28.796    19:16:59 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 1500d3d9-ce39-4076-8fba-046cb258f9c1
00:18:28.796    19:16:59 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:18:29.054  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:29.054  I0000 00:00:1733509019.794910  585799 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:29.054  I0000 00:00:1733509019.796952  585799 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:29.054  {}
00:18:29.054   19:16:59 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@274 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-0
00:18:29.054   19:16:59 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:29.312  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:29.312  I0000 00:00:1733509020.119424  585872 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:29.312  I0000 00:00:1733509020.121404  585872 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:29.312  {}
00:18:29.312   19:17:00 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@275 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-1
00:18:29.312   19:17:00 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:29.570  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:29.570  I0000 00:00:1733509020.398958  585899 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:29.570  I0000 00:00:1733509020.400813  585899 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:29.570  {}
00:18:29.570    19:17:00 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@278 -- # create_device 42 0
00:18:29.570    19:17:00 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@278 -- # jq -r .handle
00:18:29.570    19:17:00 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=42
00:18:29.570    19:17:00 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:18:29.570    19:17:00 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:29.828  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:29.828  I0000 00:00:1733509020.687324  585924 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:29.828  I0000 00:00:1733509020.689326  585924 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:29.828  [2024-12-06 19:17:00.695670] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-42' does not exist
00:18:30.086   19:17:00 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@278 -- # device3=nvme:nqn.2016-06.io.spdk:vfiouser-42
00:18:30.086   19:17:00 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@279 -- # vm_check_subsys_nqn 0 nqn.2016-06.io.spdk:vfiouser-42
00:18:30.086   19:17:00 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@89 -- # sleep 1
00:18:30.086  [2024-12-06 19:17:00.971787] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/tmp/vfiouser-42: enabling controller
00:18:31.020    19:17:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-42 /sys/class/nvme/*/subsysnqn'
00:18:31.020    19:17:01 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:18:31.020    19:17:01 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:31.020    19:17:01 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:31.020    19:17:01 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:18:31.020    19:17:01 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:18:31.020     19:17:01 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:18:31.020     19:17:01 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:18:31.020     19:17:01 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:31.020     19:17:01 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:31.020     19:17:01 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:18:31.020     19:17:01 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:18:31.020    19:17:01 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-42 /sys/class/nvme/*/subsysnqn'
00:18:31.020  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:18:31.278   19:17:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # nqn=/sys/class/nvme/nvme0/subsysnqn
00:18:31.278   19:17:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@91 -- # [[ -z /sys/class/nvme/nvme0/subsysnqn ]]
00:18:31.278   19:17:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@282 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-42
00:18:31.278   19:17:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:31.537  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:31.537  I0000 00:00:1733509022.302634  586214 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:31.537  I0000 00:00:1733509022.304445  586214 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:31.537  {}
00:18:31.537   19:17:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@283 -- # NOT vm_check_subsys_nqn 0 nqn.2016-06.io.spdk:vfiouser-42
00:18:31.537   19:17:02 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:18:31.537   19:17:02 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg vm_check_subsys_nqn 0 nqn.2016-06.io.spdk:vfiouser-42
00:18:31.537   19:17:02 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=vm_check_subsys_nqn
00:18:31.537   19:17:02 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:31.537    19:17:02 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t vm_check_subsys_nqn
00:18:31.537   19:17:02 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:31.537   19:17:02 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # vm_check_subsys_nqn 0 nqn.2016-06.io.spdk:vfiouser-42
00:18:31.537   19:17:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@89 -- # sleep 1
00:18:32.472    19:17:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-42 /sys/class/nvme/*/subsysnqn'
00:18:32.472    19:17:03 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:18:32.472    19:17:03 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:32.472    19:17:03 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:32.472    19:17:03 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:18:32.472    19:17:03 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:18:32.472     19:17:03 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:18:32.472     19:17:03 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:18:32.472     19:17:03 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:32.472     19:17:03 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:32.472     19:17:03 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:18:32.472     19:17:03 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:18:32.472    19:17:03 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-42 /sys/class/nvme/*/subsysnqn'
00:18:32.472  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:18:32.731  grep: /sys/class/nvme/*/subsysnqn: No such file or directory
00:18:32.731   19:17:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # nqn=
00:18:32.731   19:17:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@91 -- # [[ -z '' ]]
00:18:32.731   19:17:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@92 -- # error 'FAILED no NVMe on vm=0 with nqn=nqn.2016-06.io.spdk:vfiouser-42'
00:18:32.731   19:17:03 sma.sma_vfiouser_qemu -- vhost/common.sh@82 -- # echo ===========
00:18:32.731  ===========
00:18:32.731   19:17:03 sma.sma_vfiouser_qemu -- vhost/common.sh@83 -- # message ERROR 'FAILED no NVMe on vm=0 with nqn=nqn.2016-06.io.spdk:vfiouser-42'
00:18:32.731   19:17:03 sma.sma_vfiouser_qemu -- vhost/common.sh@60 -- # local verbose_out
00:18:32.731   19:17:03 sma.sma_vfiouser_qemu -- vhost/common.sh@61 -- # false
00:18:32.731   19:17:03 sma.sma_vfiouser_qemu -- vhost/common.sh@62 -- # verbose_out=
00:18:32.731   19:17:03 sma.sma_vfiouser_qemu -- vhost/common.sh@69 -- # local msg_type=ERROR
00:18:32.731   19:17:03 sma.sma_vfiouser_qemu -- vhost/common.sh@70 -- # shift
00:18:32.731   19:17:03 sma.sma_vfiouser_qemu -- vhost/common.sh@71 -- # echo -e 'ERROR: FAILED no NVMe on vm=0 with nqn=nqn.2016-06.io.spdk:vfiouser-42'
00:18:32.731  ERROR: FAILED no NVMe on vm=0 with nqn=nqn.2016-06.io.spdk:vfiouser-42
00:18:32.731   19:17:03 sma.sma_vfiouser_qemu -- vhost/common.sh@84 -- # echo ===========
00:18:32.731  ===========
00:18:32.731   19:17:03 sma.sma_vfiouser_qemu -- vhost/common.sh@86 -- # false
00:18:32.731   19:17:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@93 -- # return 1
00:18:32.731   19:17:03 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:18:32.731   19:17:03 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:32.731   19:17:03 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:32.731   19:17:03 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:32.731   19:17:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@285 -- # key0=1234567890abcdef1234567890abcdef
00:18:32.731    19:17:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@286 -- # create_device 0 0
00:18:32.731    19:17:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=0
00:18:32.731    19:17:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@286 -- # jq -r .handle
00:18:32.731    19:17:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:18:32.731    19:17:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:33.000  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:33.000  I0000 00:00:1733509023.749051  586381 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:33.000  I0000 00:00:1733509023.750866  586381 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:33.000  [2024-12-06 19:17:03.757277] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-0' does not exist
00:18:33.000   19:17:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@286 -- # device0=nvme:nqn.2016-06.io.spdk:vfiouser-0
00:18:33.000    19:17:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@287 -- # rpc_cmd bdev_get_bdevs -b null0
00:18:33.000    19:17:03 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:33.000    19:17:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@287 -- # jq -r '.[].uuid'
00:18:33.000    19:17:03 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:33.000    19:17:03 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:33.258   19:17:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@287 -- # uuid0=1500d3d9-ce39-4076-8fba-046cb258f9c1
00:18:33.258   19:17:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@290 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:33.258    19:17:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@290 -- # uuid2base64 1500d3d9-ce39-4076-8fba-046cb258f9c1
00:18:33.258    19:17:03 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:18:33.258    19:17:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@290 -- # get_cipher AES_CBC
00:18:33.258    19:17:04 sma.sma_vfiouser_qemu -- sma/common.sh@27 -- # case "$1" in
00:18:33.258    19:17:04 sma.sma_vfiouser_qemu -- sma/common.sh@28 -- # echo 0
00:18:33.258    19:17:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@290 -- # format_key 1234567890abcdef1234567890abcdef
00:18:33.258    19:17:04 sma.sma_vfiouser_qemu -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:18:33.258     19:17:04 sma.sma_vfiouser_qemu -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:18:33.258  [2024-12-06 19:17:04.021171] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/tmp/vfiouser-0: enabling controller
00:18:33.516  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:33.516  I0000 00:00:1733509024.285356  586415 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:33.516  I0000 00:00:1733509024.287215  586415 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:33.516  {}
00:18:33.516    19:17:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@307 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:18:33.516    19:17:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@307 -- # jq -r '.[0].namespaces[0].name'
00:18:33.516    19:17:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:33.516    19:17:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:33.516    19:17:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:33.516   19:17:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@307 -- # ns_bdev=9fd178e0-883d-4d46-950b-71feb8f373cf
00:18:33.516    19:17:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@308 -- # rpc_cmd bdev_get_bdevs -b 9fd178e0-883d-4d46-950b-71feb8f373cf
00:18:33.516    19:17:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@308 -- # jq -r '.[0].product_name'
00:18:33.516    19:17:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:33.516    19:17:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:33.516    19:17:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:33.516   19:17:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@308 -- # [[ crypto == \c\r\y\p\t\o ]]
00:18:33.516    19:17:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@309 -- # rpc_cmd bdev_get_bdevs -b 9fd178e0-883d-4d46-950b-71feb8f373cf
00:18:33.516    19:17:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:33.516    19:17:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@309 -- # jq -r '.[] | select(.product_name == "crypto")'
00:18:33.516    19:17:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:33.516    19:17:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:33.773   19:17:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@309 -- # crypto_bdev='{
00:18:33.773    "name": "9fd178e0-883d-4d46-950b-71feb8f373cf",
00:18:33.774    "aliases": [
00:18:33.774      "46634548-de00-5b2e-81d7-4a7af1d3804c"
00:18:33.774    ],
00:18:33.774    "product_name": "crypto",
00:18:33.774    "block_size": 4096,
00:18:33.774    "num_blocks": 25600,
00:18:33.774    "uuid": "46634548-de00-5b2e-81d7-4a7af1d3804c",
00:18:33.774    "assigned_rate_limits": {
00:18:33.774      "rw_ios_per_sec": 0,
00:18:33.774      "rw_mbytes_per_sec": 0,
00:18:33.774      "r_mbytes_per_sec": 0,
00:18:33.774      "w_mbytes_per_sec": 0
00:18:33.774    },
00:18:33.774    "claimed": true,
00:18:33.774    "claim_type": "exclusive_write",
00:18:33.774    "zoned": false,
00:18:33.774    "supported_io_types": {
00:18:33.774      "read": true,
00:18:33.774      "write": true,
00:18:33.774      "unmap": false,
00:18:33.774      "flush": false,
00:18:33.774      "reset": true,
00:18:33.774      "nvme_admin": false,
00:18:33.774      "nvme_io": false,
00:18:33.774      "nvme_io_md": false,
00:18:33.774      "write_zeroes": true,
00:18:33.774      "zcopy": false,
00:18:33.774      "get_zone_info": false,
00:18:33.774      "zone_management": false,
00:18:33.774      "zone_append": false,
00:18:33.774      "compare": false,
00:18:33.774      "compare_and_write": false,
00:18:33.774      "abort": false,
00:18:33.774      "seek_hole": false,
00:18:33.774      "seek_data": false,
00:18:33.774      "copy": false,
00:18:33.774      "nvme_iov_md": false
00:18:33.774    },
00:18:33.774    "memory_domains": [
00:18:33.774      {
00:18:33.774        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:33.774        "dma_device_type": 2
00:18:33.774      }
00:18:33.774    ],
00:18:33.774    "driver_specific": {
00:18:33.774      "crypto": {
00:18:33.774        "base_bdev_name": "null0",
00:18:33.774        "name": "9fd178e0-883d-4d46-950b-71feb8f373cf",
00:18:33.774        "key_name": "9fd178e0-883d-4d46-950b-71feb8f373cf_AES_CBC"
00:18:33.774      }
00:18:33.774    }
00:18:33.774  }'
00:18:33.774    19:17:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@310 -- # rpc_cmd bdev_get_bdevs
00:18:33.774    19:17:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@310 -- # jq -r '[.[] | select(.product_name == "crypto")] | length'
00:18:33.774    19:17:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:33.774    19:17:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:33.774    19:17:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:33.774   19:17:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@310 -- # [[ 1 -eq 1 ]]
00:18:33.774    19:17:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@312 -- # jq -r .driver_specific.crypto.key_name
00:18:33.774   19:17:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@312 -- # key_name=9fd178e0-883d-4d46-950b-71feb8f373cf_AES_CBC
00:18:33.774    19:17:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@313 -- # rpc_cmd accel_crypto_keys_get -k 9fd178e0-883d-4d46-950b-71feb8f373cf_AES_CBC
00:18:33.774    19:17:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:33.774    19:17:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:33.774    19:17:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:33.774   19:17:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@313 -- # key_obj='[
00:18:33.774  {
00:18:33.774  "name": "9fd178e0-883d-4d46-950b-71feb8f373cf_AES_CBC",
00:18:33.774  "cipher": "AES_CBC",
00:18:33.774  "key": "1234567890abcdef1234567890abcdef"
00:18:33.774  }
00:18:33.774  ]'
00:18:33.774    19:17:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@314 -- # jq -r '.[0].key'
00:18:33.774   19:17:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@314 -- # [[ 1234567890abcdef1234567890abcdef == \1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f\1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f ]]
00:18:33.774    19:17:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@315 -- # jq -r '.[0].cipher'
00:18:33.774   19:17:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@315 -- # [[ AES_CBC == \A\E\S\_\C\B\C ]]
00:18:33.774   19:17:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@317 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 1500d3d9-ce39-4076-8fba-046cb258f9c1
00:18:33.774   19:17:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:33.774    19:17:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 1500d3d9-ce39-4076-8fba-046cb258f9c1
00:18:33.774    19:17:04 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:18:34.032  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:34.032  I0000 00:00:1733509024.909828  586584 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:34.032  I0000 00:00:1733509024.911669  586584 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:34.032  {}
00:18:34.289   19:17:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@318 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-0
00:18:34.289   19:17:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:34.289  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:34.289  I0000 00:00:1733509025.221556  586614 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:34.289  I0000 00:00:1733509025.223345  586614 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:34.553  {}
00:18:34.553    19:17:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@319 -- # rpc_cmd bdev_get_bdevs
00:18:34.553    19:17:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@319 -- # jq -r '.[] | select(.product_name == "crypto")'
00:18:34.553    19:17:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:34.553    19:17:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:34.553    19:17:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@319 -- # jq -r length
00:18:34.553    19:17:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:34.553   19:17:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@319 -- # [[ '' -eq 0 ]]
00:18:34.553   19:17:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@322 -- # device_vfio_user=1
00:18:34.553    19:17:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@323 -- # create_device 0 0
00:18:34.553    19:17:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@323 -- # jq -r .handle
00:18:34.553    19:17:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=0
00:18:34.553    19:17:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:18:34.553    19:17:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:34.813  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:34.813  I0000 00:00:1733509025.530125  586762 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:34.813  I0000 00:00:1733509025.531889  586762 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:34.813  [2024-12-06 19:17:05.535422] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-0' does not exist
00:18:34.813   19:17:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@323 -- # device0=nvme:nqn.2016-06.io.spdk:vfiouser-0
00:18:34.813   19:17:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@324 -- # attach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 1500d3d9-ce39-4076-8fba-046cb258f9c1
00:18:34.813   19:17:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:34.813    19:17:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # uuid2base64 1500d3d9-ce39-4076-8fba-046cb258f9c1
00:18:34.813    19:17:05 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:18:35.071  [2024-12-06 19:17:05.799309] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/tmp/vfiouser-0: enabling controller
00:18:35.071  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:35.071  I0000 00:00:1733509025.964265  586791 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:35.071  I0000 00:00:1733509025.966211  586791 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:35.071  {}
00:18:35.329   19:17:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@327 -- # diff /dev/fd/62 /dev/fd/61
00:18:35.329    19:17:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@327 -- # jq --sort-keys
00:18:35.329    19:17:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@327 -- # get_qos_caps 1
00:18:35.329    19:17:06 sma.sma_vfiouser_qemu -- sma/common.sh@45 -- # local rootdir
00:18:35.329    19:17:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@327 -- # jq --sort-keys
00:18:35.329     19:17:06 sma.sma_vfiouser_qemu -- sma/common.sh@47 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:18:35.329    19:17:06 sma.sma_vfiouser_qemu -- sma/common.sh@47 -- # rootdir=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../..
00:18:35.329    19:17:06 sma.sma_vfiouser_qemu -- sma/common.sh@49 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../../scripts/sma-client.py
00:18:35.329  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:35.329  I0000 00:00:1733509026.271540  586829 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:35.329  I0000 00:00:1733509026.273315  586829 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:35.587   19:17:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@340 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:35.587    19:17:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@340 -- # uuid2base64 1500d3d9-ce39-4076-8fba-046cb258f9c1
00:18:35.587    19:17:06 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:18:35.844  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:35.844  I0000 00:00:1733509026.576766  586961 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:35.844  I0000 00:00:1733509026.578510  586961 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:35.844  {}
00:18:35.844   19:17:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@359 -- # diff /dev/fd/62 /dev/fd/61
00:18:35.844    19:17:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@359 -- # jq --sort-keys
00:18:35.844    19:17:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@359 -- # rpc_cmd bdev_get_bdevs -b null0
00:18:35.844    19:17:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@359 -- # jq --sort-keys '.[].assigned_rate_limits'
00:18:35.844    19:17:06 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:35.844    19:17:06 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:35.844    19:17:06 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:35.844   19:17:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@370 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 1500d3d9-ce39-4076-8fba-046cb258f9c1
00:18:35.844   19:17:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:35.844    19:17:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 1500d3d9-ce39-4076-8fba-046cb258f9c1
00:18:35.844    19:17:06 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:18:36.124  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:36.124  I0000 00:00:1733509026.962632  587003 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:36.124  I0000 00:00:1733509026.964438  587003 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:36.124  {}
00:18:36.124   19:17:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@371 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-0
00:18:36.124   19:17:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:36.382  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:36.382  I0000 00:00:1733509027.255646  587032 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:36.382  I0000 00:00:1733509027.257432  587032 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:36.382  {}
00:18:36.382   19:17:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@373 -- # cleanup
00:18:36.382   19:17:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@98 -- # vm_kill_all
00:18:36.382   19:17:07 sma.sma_vfiouser_qemu -- vhost/common.sh@476 -- # local vm
00:18:36.382    19:17:07 sma.sma_vfiouser_qemu -- vhost/common.sh@477 -- # vm_list_all
00:18:36.382    19:17:07 sma.sma_vfiouser_qemu -- vhost/common.sh@466 -- # vms=()
00:18:36.382    19:17:07 sma.sma_vfiouser_qemu -- vhost/common.sh@466 -- # local vms
00:18:36.382    19:17:07 sma.sma_vfiouser_qemu -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:18:36.382    19:17:07 sma.sma_vfiouser_qemu -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:18:36.382    19:17:07 sma.sma_vfiouser_qemu -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/0
00:18:36.382   19:17:07 sma.sma_vfiouser_qemu -- vhost/common.sh@477 -- # for vm in $(vm_list_all)
00:18:36.382   19:17:07 sma.sma_vfiouser_qemu -- vhost/common.sh@478 -- # vm_kill 0
00:18:36.382   19:17:07 sma.sma_vfiouser_qemu -- vhost/common.sh@442 -- # vm_num_is_valid 0
00:18:36.382   19:17:07 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:18:36.382   19:17:07 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:18:36.382   19:17:07 sma.sma_vfiouser_qemu -- vhost/common.sh@443 -- # local vm_dir=/root/vhost_test/vms/0
00:18:36.383   19:17:07 sma.sma_vfiouser_qemu -- vhost/common.sh@445 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:18:36.383   19:17:07 sma.sma_vfiouser_qemu -- vhost/common.sh@449 -- # local vm_pid
00:18:36.383    19:17:07 sma.sma_vfiouser_qemu -- vhost/common.sh@450 -- # cat /root/vhost_test/vms/0/qemu.pid
00:18:36.383   19:17:07 sma.sma_vfiouser_qemu -- vhost/common.sh@450 -- # vm_pid=580185
00:18:36.383   19:17:07 sma.sma_vfiouser_qemu -- vhost/common.sh@452 -- # notice 'Killing virtual machine /root/vhost_test/vms/0 (pid=580185)'
00:18:36.383   19:17:07 sma.sma_vfiouser_qemu -- vhost/common.sh@94 -- # message INFO 'Killing virtual machine /root/vhost_test/vms/0 (pid=580185)'
00:18:36.383   19:17:07 sma.sma_vfiouser_qemu -- vhost/common.sh@60 -- # local verbose_out
00:18:36.383   19:17:07 sma.sma_vfiouser_qemu -- vhost/common.sh@61 -- # false
00:18:36.383   19:17:07 sma.sma_vfiouser_qemu -- vhost/common.sh@62 -- # verbose_out=
00:18:36.383   19:17:07 sma.sma_vfiouser_qemu -- vhost/common.sh@69 -- # local msg_type=INFO
00:18:36.383   19:17:07 sma.sma_vfiouser_qemu -- vhost/common.sh@70 -- # shift
00:18:36.383   19:17:07 sma.sma_vfiouser_qemu -- vhost/common.sh@71 -- # echo -e 'INFO: Killing virtual machine /root/vhost_test/vms/0 (pid=580185)'
00:18:36.383  INFO: Killing virtual machine /root/vhost_test/vms/0 (pid=580185)
00:18:36.383   19:17:07 sma.sma_vfiouser_qemu -- vhost/common.sh@454 -- # /bin/kill 580185
00:18:36.383   19:17:07 sma.sma_vfiouser_qemu -- vhost/common.sh@455 -- # notice 'process 580185 killed'
00:18:36.383   19:17:07 sma.sma_vfiouser_qemu -- vhost/common.sh@94 -- # message INFO 'process 580185 killed'
00:18:36.383   19:17:07 sma.sma_vfiouser_qemu -- vhost/common.sh@60 -- # local verbose_out
00:18:36.383   19:17:07 sma.sma_vfiouser_qemu -- vhost/common.sh@61 -- # false
00:18:36.383   19:17:07 sma.sma_vfiouser_qemu -- vhost/common.sh@62 -- # verbose_out=
00:18:36.383   19:17:07 sma.sma_vfiouser_qemu -- vhost/common.sh@69 -- # local msg_type=INFO
00:18:36.383   19:17:07 sma.sma_vfiouser_qemu -- vhost/common.sh@70 -- # shift
00:18:36.383   19:17:07 sma.sma_vfiouser_qemu -- vhost/common.sh@71 -- # echo -e 'INFO: process 580185 killed'
00:18:36.383  INFO: process 580185 killed
00:18:36.383   19:17:07 sma.sma_vfiouser_qemu -- vhost/common.sh@456 -- # rm -rf /root/vhost_test/vms/0
00:18:36.383   19:17:07 sma.sma_vfiouser_qemu -- vhost/common.sh@481 -- # rm -rf /root/vhost_test/vms
00:18:36.383   19:17:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@99 -- # killprocess 582920
00:18:36.383   19:17:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@954 -- # '[' -z 582920 ']'
00:18:36.383   19:17:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@958 -- # kill -0 582920
00:18:36.383    19:17:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@959 -- # uname
00:18:36.383   19:17:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:36.383    19:17:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 582920
00:18:36.640   19:17:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:36.640   19:17:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:36.640   19:17:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@972 -- # echo 'killing process with pid 582920'
00:18:36.641  killing process with pid 582920
00:18:36.641   19:17:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@973 -- # kill 582920
00:18:36.641   19:17:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@978 -- # wait 582920
00:18:38.536   19:17:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@100 -- # killprocess 583184
00:18:38.536   19:17:09 sma.sma_vfiouser_qemu -- common/autotest_common.sh@954 -- # '[' -z 583184 ']'
00:18:38.536   19:17:09 sma.sma_vfiouser_qemu -- common/autotest_common.sh@958 -- # kill -0 583184
00:18:38.536    19:17:09 sma.sma_vfiouser_qemu -- common/autotest_common.sh@959 -- # uname
00:18:38.536   19:17:09 sma.sma_vfiouser_qemu -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:38.536    19:17:09 sma.sma_vfiouser_qemu -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 583184
00:18:38.536   19:17:09 sma.sma_vfiouser_qemu -- common/autotest_common.sh@960 -- # process_name=python3
00:18:38.536   19:17:09 sma.sma_vfiouser_qemu -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:18:38.536   19:17:09 sma.sma_vfiouser_qemu -- common/autotest_common.sh@972 -- # echo 'killing process with pid 583184'
00:18:38.536  killing process with pid 583184
00:18:38.536   19:17:09 sma.sma_vfiouser_qemu -- common/autotest_common.sh@973 -- # kill 583184
00:18:38.536   19:17:09 sma.sma_vfiouser_qemu -- common/autotest_common.sh@978 -- # wait 583184
00:18:38.536   19:17:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@101 -- # '[' -e /tmp/sma/vfio-user/qemu ']'
00:18:38.536   19:17:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@101 -- # rm -rf /tmp/sma/vfio-user/qemu
00:18:38.536   19:17:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@374 -- # trap - SIGINT SIGTERM EXIT
00:18:38.536  
00:18:38.536  real	0m51.467s
00:18:38.536  user	0m38.758s
00:18:38.536  sys	0m3.720s
00:18:38.536   19:17:09 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:38.536   19:17:09 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:18:38.536  ************************************
00:18:38.536  END TEST sma_vfiouser_qemu
00:18:38.536  ************************************
00:18:38.536   19:17:09 sma -- sma/sma.sh@13 -- # run_test sma_plugins /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins.sh
00:18:38.536   19:17:09 sma -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:18:38.536   19:17:09 sma -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:38.536   19:17:09 sma -- common/autotest_common.sh@10 -- # set +x
00:18:38.536  ************************************
00:18:38.536  START TEST sma_plugins
00:18:38.536  ************************************
00:18:38.536   19:17:09 sma.sma_plugins -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins.sh
00:18:38.536  * Looking for test storage...
00:18:38.536  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:18:38.536    19:17:09 sma.sma_plugins -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:18:38.536     19:17:09 sma.sma_plugins -- common/autotest_common.sh@1711 -- # lcov --version
00:18:38.536     19:17:09 sma.sma_plugins -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:18:38.536    19:17:09 sma.sma_plugins -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:18:38.536    19:17:09 sma.sma_plugins -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:18:38.536    19:17:09 sma.sma_plugins -- scripts/common.sh@333 -- # local ver1 ver1_l
00:18:38.536    19:17:09 sma.sma_plugins -- scripts/common.sh@334 -- # local ver2 ver2_l
00:18:38.536    19:17:09 sma.sma_plugins -- scripts/common.sh@336 -- # IFS=.-:
00:18:38.536    19:17:09 sma.sma_plugins -- scripts/common.sh@336 -- # read -ra ver1
00:18:38.536    19:17:09 sma.sma_plugins -- scripts/common.sh@337 -- # IFS=.-:
00:18:38.536    19:17:09 sma.sma_plugins -- scripts/common.sh@337 -- # read -ra ver2
00:18:38.536    19:17:09 sma.sma_plugins -- scripts/common.sh@338 -- # local 'op=<'
00:18:38.536    19:17:09 sma.sma_plugins -- scripts/common.sh@340 -- # ver1_l=2
00:18:38.536    19:17:09 sma.sma_plugins -- scripts/common.sh@341 -- # ver2_l=1
00:18:38.536    19:17:09 sma.sma_plugins -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:18:38.536    19:17:09 sma.sma_plugins -- scripts/common.sh@344 -- # case "$op" in
00:18:38.536    19:17:09 sma.sma_plugins -- scripts/common.sh@345 -- # : 1
00:18:38.536    19:17:09 sma.sma_plugins -- scripts/common.sh@364 -- # (( v = 0 ))
00:18:38.536    19:17:09 sma.sma_plugins -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:18:38.536     19:17:09 sma.sma_plugins -- scripts/common.sh@365 -- # decimal 1
00:18:38.536     19:17:09 sma.sma_plugins -- scripts/common.sh@353 -- # local d=1
00:18:38.536     19:17:09 sma.sma_plugins -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:18:38.536     19:17:09 sma.sma_plugins -- scripts/common.sh@355 -- # echo 1
00:18:38.536    19:17:09 sma.sma_plugins -- scripts/common.sh@365 -- # ver1[v]=1
00:18:38.536     19:17:09 sma.sma_plugins -- scripts/common.sh@366 -- # decimal 2
00:18:38.536     19:17:09 sma.sma_plugins -- scripts/common.sh@353 -- # local d=2
00:18:38.536     19:17:09 sma.sma_plugins -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:18:38.536     19:17:09 sma.sma_plugins -- scripts/common.sh@355 -- # echo 2
00:18:38.536    19:17:09 sma.sma_plugins -- scripts/common.sh@366 -- # ver2[v]=2
00:18:38.536    19:17:09 sma.sma_plugins -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:18:38.537    19:17:09 sma.sma_plugins -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:18:38.537    19:17:09 sma.sma_plugins -- scripts/common.sh@368 -- # return 0
00:18:38.537    19:17:09 sma.sma_plugins -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:18:38.537    19:17:09 sma.sma_plugins -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:18:38.537  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:38.537  		--rc genhtml_branch_coverage=1
00:18:38.537  		--rc genhtml_function_coverage=1
00:18:38.537  		--rc genhtml_legend=1
00:18:38.537  		--rc geninfo_all_blocks=1
00:18:38.537  		--rc geninfo_unexecuted_blocks=1
00:18:38.537  		
00:18:38.537  		'
00:18:38.537    19:17:09 sma.sma_plugins -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:18:38.537  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:38.537  		--rc genhtml_branch_coverage=1
00:18:38.537  		--rc genhtml_function_coverage=1
00:18:38.537  		--rc genhtml_legend=1
00:18:38.537  		--rc geninfo_all_blocks=1
00:18:38.537  		--rc geninfo_unexecuted_blocks=1
00:18:38.537  		
00:18:38.537  		'
00:18:38.537    19:17:09 sma.sma_plugins -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:18:38.537  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:38.537  		--rc genhtml_branch_coverage=1
00:18:38.537  		--rc genhtml_function_coverage=1
00:18:38.537  		--rc genhtml_legend=1
00:18:38.537  		--rc geninfo_all_blocks=1
00:18:38.537  		--rc geninfo_unexecuted_blocks=1
00:18:38.537  		
00:18:38.537  		'
00:18:38.537    19:17:09 sma.sma_plugins -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:18:38.537  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:38.537  		--rc genhtml_branch_coverage=1
00:18:38.537  		--rc genhtml_function_coverage=1
00:18:38.537  		--rc genhtml_legend=1
00:18:38.537  		--rc geninfo_all_blocks=1
00:18:38.537  		--rc geninfo_unexecuted_blocks=1
00:18:38.537  		
00:18:38.537  		'
00:18:38.537   19:17:09 sma.sma_plugins -- sma/plugins.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:18:38.537   19:17:09 sma.sma_plugins -- sma/plugins.sh@28 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:18:38.537   19:17:09 sma.sma_plugins -- sma/plugins.sh@31 -- # tgtpid=587394
00:18:38.537   19:17:09 sma.sma_plugins -- sma/plugins.sh@30 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:18:38.537   19:17:09 sma.sma_plugins -- sma/plugins.sh@43 -- # smapid=587395
00:18:38.537   19:17:09 sma.sma_plugins -- sma/plugins.sh@45 -- # sma_waitforlisten
00:18:38.537   19:17:09 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:18:38.537   19:17:09 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:18:38.537   19:17:09 sma.sma_plugins -- sma/plugins.sh@34 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:18:38.537   19:17:09 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:18:38.537   19:17:09 sma.sma_plugins -- sma/plugins.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:18:38.537    19:17:09 sma.sma_plugins -- sma/plugins.sh@34 -- # cat
00:18:38.537   19:17:09 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:18:38.537   19:17:09 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:18:38.795   19:17:09 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:18:38.796  [2024-12-06 19:17:09.578559] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:18:38.796  [2024-12-06 19:17:09.578701] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid587394 ]
00:18:38.796  EAL: No free 2048 kB hugepages reported on node 1
00:18:38.796  [2024-12-06 19:17:09.713676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:39.053  [2024-12-06 19:17:09.832880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:39.618   19:17:10 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:18:39.618   19:17:10 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:18:39.618   19:17:10 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:18:39.618   19:17:10 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:18:39.875  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:39.875  I0000 00:00:1733509030.737750  587395 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:40.808   19:17:11 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:18:40.808   19:17:11 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:18:40.808   19:17:11 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:18:40.808   19:17:11 sma.sma_plugins -- sma/common.sh@12 -- # return 0
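The `sma_waitforlisten` trace that keeps recurring above (sma/common.sh lines 7–14: init `sma_addr`/`sma_port`, probe with `nc -z`, `sleep 1s`, give up after 5 tries) can be sketched as a standalone function. This is a reconstruction from the xtrace output, not the actual SPDK source; as an assumption, bash's `/dev/tcp` redirection stands in here for the `nc -z` probe the real script uses, so the sketch runs without `nc` installed.

```shell
#!/usr/bin/env bash
# Sketch of the sma_waitforlisten loop seen in the trace: poll addr:port
# up to 5 times, one second apart, returning 0 once something listens.
wait_for_listen() {
    local sma_addr=${1:-127.0.0.1}
    local sma_port=${2:-8080}
    local i
    for (( i = 0; i < 5; i++ )); do
        # Probe: opening /dev/tcp/<addr>/<port> succeeds only if a
        # server is accepting connections (stand-in for `nc -z`).
        if (exec 3<>"/dev/tcp/${sma_addr}/${sma_port}") 2>/dev/null; then
            return 0
        fi
        sleep 1s
    done
    return 1  # server never came up within ~5 seconds
}
```

The trace shows exactly this shape: each new `sma.py` instance is followed by one or more `(( i++ )) / nc -z / sleep 1s` rounds until `-- sma/common.sh@12 -- return 0` appears and the test proceeds to `create_device`.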
00:18:40.808    19:17:11 sma.sma_plugins -- sma/plugins.sh@47 -- # create_device nvme
00:18:40.808    19:17:11 sma.sma_plugins -- sma/plugins.sh@47 -- # jq -r .handle
00:18:40.808    19:17:11 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:41.067  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:41.067  I0000 00:00:1733509031.786491  587697 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:41.067  I0000 00:00:1733509031.788246  587697 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:41.067   19:17:11 sma.sma_plugins -- sma/plugins.sh@47 -- # [[ nvme:plugin1-device1:nop == \n\v\m\e\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\1\:\n\o\p ]]
00:18:41.067    19:17:11 sma.sma_plugins -- sma/plugins.sh@48 -- # create_device nvmf_tcp
00:18:41.067    19:17:11 sma.sma_plugins -- sma/plugins.sh@48 -- # jq -r .handle
00:18:41.067    19:17:11 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:41.325  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:41.325  I0000 00:00:1733509032.059051  587728 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:41.325  I0000 00:00:1733509032.060768  587728 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:41.325   19:17:12 sma.sma_plugins -- sma/plugins.sh@48 -- # [[ nvmf_tcp:plugin1-device2:nop == \n\v\m\f\_\t\c\p\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\2\:\n\o\p ]]
00:18:41.325   19:17:12 sma.sma_plugins -- sma/plugins.sh@50 -- # killprocess 587395
00:18:41.325   19:17:12 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 587395 ']'
00:18:41.325   19:17:12 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 587395
00:18:41.325    19:17:12 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:18:41.325   19:17:12 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:41.325    19:17:12 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 587395
00:18:41.325   19:17:12 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:18:41.325   19:17:12 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:18:41.325   19:17:12 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 587395'
00:18:41.325  killing process with pid 587395
00:18:41.325   19:17:12 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 587395
00:18:41.325   19:17:12 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 587395
00:18:41.325   19:17:12 sma.sma_plugins -- sma/plugins.sh@61 -- # smapid=587877
00:18:41.325   19:17:12 sma.sma_plugins -- sma/plugins.sh@62 -- # sma_waitforlisten
00:18:41.325   19:17:12 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:18:41.325   19:17:12 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:18:41.325   19:17:12 sma.sma_plugins -- sma/plugins.sh@53 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:18:41.325   19:17:12 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:18:41.325   19:17:12 sma.sma_plugins -- sma/plugins.sh@53 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:18:41.325    19:17:12 sma.sma_plugins -- sma/plugins.sh@53 -- # cat
00:18:41.325   19:17:12 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:18:41.325   19:17:12 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:18:41.325   19:17:12 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:18:41.585  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:41.585  I0000 00:00:1733509032.398848  587877 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:42.526   19:17:13 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:18:42.526   19:17:13 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:18:42.526   19:17:13 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:18:42.526   19:17:13 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:18:42.526    19:17:13 sma.sma_plugins -- sma/plugins.sh@64 -- # create_device nvmf_tcp
00:18:42.526    19:17:13 sma.sma_plugins -- sma/plugins.sh@64 -- # jq -r .handle
00:18:42.526    19:17:13 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:42.526  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:42.526  I0000 00:00:1733509033.453375  588039 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:42.526  I0000 00:00:1733509033.455298  588039 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:42.784   19:17:13 sma.sma_plugins -- sma/plugins.sh@64 -- # [[ nvmf_tcp:plugin1-device2:nop == \n\v\m\f\_\t\c\p\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\2\:\n\o\p ]]
00:18:42.784   19:17:13 sma.sma_plugins -- sma/plugins.sh@65 -- # NOT create_device nvme
00:18:42.784   19:17:13 sma.sma_plugins -- common/autotest_common.sh@652 -- # local es=0
00:18:42.784   19:17:13 sma.sma_plugins -- common/autotest_common.sh@654 -- # valid_exec_arg create_device nvme
00:18:42.784   19:17:13 sma.sma_plugins -- common/autotest_common.sh@640 -- # local arg=create_device
00:18:42.784   19:17:13 sma.sma_plugins -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:42.784    19:17:13 sma.sma_plugins -- common/autotest_common.sh@644 -- # type -t create_device
00:18:42.784   19:17:13 sma.sma_plugins -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:42.784   19:17:13 sma.sma_plugins -- common/autotest_common.sh@655 -- # create_device nvme
00:18:42.784   19:17:13 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:42.784  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:42.784  I0000 00:00:1733509033.708935  588068 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:42.784  I0000 00:00:1733509033.710789  588068 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:42.784  Traceback (most recent call last):
00:18:42.784    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:18:42.784      main(sys.argv[1:])
00:18:42.784    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:18:42.784      result = client.call(request['method'], request.get('params', {}))
00:18:42.784               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:18:42.784    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:18:42.784      response = func(request=json_format.ParseDict(params, input()))
00:18:42.784                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:18:42.784    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:18:42.784      return _end_unary_response_blocking(state, call, False, None)
00:18:42.784             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:18:42.784    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:18:42.784      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:18:42.784      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:18:42.785  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:18:42.785  	status = StatusCode.INVALID_ARGUMENT
00:18:42.785  	details = "Unsupported device type"
00:18:42.785  	debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B::1%5D:8080 {grpc_message:"Unsupported device type", grpc_status:3, created_time:"2024-12-06T19:17:13.712930265+01:00"}"
00:18:42.785  >
00:18:43.043   19:17:13 sma.sma_plugins -- common/autotest_common.sh@655 -- # es=1
00:18:43.043   19:17:13 sma.sma_plugins -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:43.043   19:17:13 sma.sma_plugins -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:43.043   19:17:13 sma.sma_plugins -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:43.043   19:17:13 sma.sma_plugins -- sma/plugins.sh@67 -- # killprocess 587877
00:18:43.043   19:17:13 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 587877 ']'
00:18:43.043   19:17:13 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 587877
00:18:43.043    19:17:13 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:18:43.043   19:17:13 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:43.043    19:17:13 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 587877
00:18:43.043   19:17:13 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:18:43.043   19:17:13 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:18:43.043   19:17:13 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 587877'
00:18:43.043  killing process with pid 587877
00:18:43.043   19:17:13 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 587877
00:18:43.043   19:17:13 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 587877
00:18:43.609   19:17:14 sma.sma_plugins -- sma/plugins.sh@80 -- # smapid=588218
00:18:43.609   19:17:14 sma.sma_plugins -- sma/plugins.sh@81 -- # sma_waitforlisten
00:18:43.609   19:17:14 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:18:43.609   19:17:14 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:18:43.609   19:17:14 sma.sma_plugins -- sma/plugins.sh@70 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:18:43.609    19:17:14 sma.sma_plugins -- sma/plugins.sh@70 -- # cat
00:18:43.609   19:17:14 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:18:43.609   19:17:14 sma.sma_plugins -- sma/plugins.sh@70 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:18:43.609   19:17:14 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:18:43.609   19:17:14 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:18:43.609   19:17:14 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:18:43.867  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:43.867  I0000 00:00:1733509034.612710  588218 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:44.433   19:17:15 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:18:44.433   19:17:15 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:18:44.433   19:17:15 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:18:44.690   19:17:15 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:18:44.690    19:17:15 sma.sma_plugins -- sma/plugins.sh@83 -- # create_device nvme
00:18:44.690    19:17:15 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:44.690    19:17:15 sma.sma_plugins -- sma/plugins.sh@83 -- # jq -r .handle
00:18:44.948  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:44.948  I0000 00:00:1733509035.650676  588362 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:44.948  I0000 00:00:1733509035.652625  588362 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:44.948   19:17:15 sma.sma_plugins -- sma/plugins.sh@83 -- # [[ nvme:plugin1-device1:nop == \n\v\m\e\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\1\:\n\o\p ]]
00:18:44.948    19:17:15 sma.sma_plugins -- sma/plugins.sh@84 -- # create_device nvmf_tcp
00:18:44.948    19:17:15 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:44.948    19:17:15 sma.sma_plugins -- sma/plugins.sh@84 -- # jq -r .handle
00:18:45.206  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:45.206  I0000 00:00:1733509035.921862  588414 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:45.206  I0000 00:00:1733509035.923847  588414 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:45.206   19:17:15 sma.sma_plugins -- sma/plugins.sh@84 -- # [[ nvmf_tcp:plugin1-device2:nop == \n\v\m\f\_\t\c\p\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\2\:\n\o\p ]]
00:18:45.206   19:17:15 sma.sma_plugins -- sma/plugins.sh@86 -- # killprocess 588218
00:18:45.206   19:17:15 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 588218 ']'
00:18:45.206   19:17:15 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 588218
00:18:45.206    19:17:15 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:18:45.206   19:17:15 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:45.206    19:17:15 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 588218
00:18:45.206   19:17:15 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:18:45.206   19:17:15 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:18:45.206   19:17:15 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 588218'
00:18:45.206  killing process with pid 588218
00:18:45.206   19:17:15 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 588218
00:18:45.206   19:17:15 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 588218
00:18:45.206   19:17:16 sma.sma_plugins -- sma/plugins.sh@99 -- # smapid=588451
00:18:45.206   19:17:16 sma.sma_plugins -- sma/plugins.sh@100 -- # sma_waitforlisten
00:18:45.206   19:17:16 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:18:45.206   19:17:16 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:18:45.206   19:17:16 sma.sma_plugins -- sma/plugins.sh@89 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:18:45.206    19:17:16 sma.sma_plugins -- sma/plugins.sh@89 -- # cat
00:18:45.206   19:17:16 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:18:45.206   19:17:16 sma.sma_plugins -- sma/plugins.sh@89 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:18:45.206   19:17:16 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:18:45.206   19:17:16 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:18:45.206   19:17:16 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:18:45.464  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:45.464  I0000 00:00:1733509036.278148  588451 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:46.396   19:17:17 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:18:46.396   19:17:17 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:18:46.396   19:17:17 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:18:46.396   19:17:17 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:18:46.396    19:17:17 sma.sma_plugins -- sma/plugins.sh@102 -- # create_device nvme
00:18:46.396    19:17:17 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:46.396    19:17:17 sma.sma_plugins -- sma/plugins.sh@102 -- # jq -r .handle
00:18:46.396  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:46.396  I0000 00:00:1733509037.325237  588612 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:46.396  I0000 00:00:1733509037.327185  588612 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:46.655   19:17:17 sma.sma_plugins -- sma/plugins.sh@102 -- # [[ nvme:plugin2-device1:nop == \n\v\m\e\:\p\l\u\g\i\n\2\-\d\e\v\i\c\e\1\:\n\o\p ]]
00:18:46.655    19:17:17 sma.sma_plugins -- sma/plugins.sh@103 -- # create_device nvmf_tcp
00:18:46.655    19:17:17 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:46.655    19:17:17 sma.sma_plugins -- sma/plugins.sh@103 -- # jq -r .handle
00:18:46.655  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:46.655  I0000 00:00:1733509037.587023  588641 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:46.655  I0000 00:00:1733509037.589055  588641 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:46.912   19:17:17 sma.sma_plugins -- sma/plugins.sh@103 -- # [[ nvmf_tcp:plugin2-device2:nop == \n\v\m\f\_\t\c\p\:\p\l\u\g\i\n\2\-\d\e\v\i\c\e\2\:\n\o\p ]]
00:18:46.912   19:17:17 sma.sma_plugins -- sma/plugins.sh@105 -- # killprocess 588451
00:18:46.912   19:17:17 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 588451 ']'
00:18:46.912   19:17:17 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 588451
00:18:46.912    19:17:17 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:18:46.912   19:17:17 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:46.912    19:17:17 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 588451
00:18:46.912   19:17:17 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:18:46.912   19:17:17 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:18:46.912   19:17:17 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 588451'
00:18:46.912  killing process with pid 588451
00:18:46.912   19:17:17 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 588451
00:18:46.912   19:17:17 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 588451
00:18:46.912   19:17:17 sma.sma_plugins -- sma/plugins.sh@118 -- # smapid=588785
00:18:46.912   19:17:17 sma.sma_plugins -- sma/plugins.sh@119 -- # sma_waitforlisten
00:18:46.912   19:17:17 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:18:46.912   19:17:17 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:18:46.912   19:17:17 sma.sma_plugins -- sma/plugins.sh@108 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:18:46.912   19:17:17 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:18:46.912    19:17:17 sma.sma_plugins -- sma/plugins.sh@108 -- # cat
00:18:46.912   19:17:17 sma.sma_plugins -- sma/plugins.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:18:46.912   19:17:17 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:18:46.912   19:17:17 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:18:46.912   19:17:17 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:18:47.170  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:47.170  I0000 00:00:1733509037.930699  588785 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:48.104   19:17:18 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:18:48.104   19:17:18 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:18:48.104   19:17:18 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:18:48.104   19:17:18 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:18:48.104    19:17:18 sma.sma_plugins -- sma/plugins.sh@121 -- # create_device nvme
00:18:48.104    19:17:18 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:48.104    19:17:18 sma.sma_plugins -- sma/plugins.sh@121 -- # jq -r .handle
00:18:48.104  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:48.104  I0000 00:00:1733509038.983887  588873 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:48.104  I0000 00:00:1733509038.985701  588873 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:48.104   19:17:19 sma.sma_plugins -- sma/plugins.sh@121 -- # [[ nvme:plugin1-device1:nop == \n\v\m\e\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\1\:\n\o\p ]]
00:18:48.104    19:17:19 sma.sma_plugins -- sma/plugins.sh@122 -- # create_device nvmf_tcp
00:18:48.104    19:17:19 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:48.104    19:17:19 sma.sma_plugins -- sma/plugins.sh@122 -- # jq -r .handle
00:18:48.363  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:48.363  I0000 00:00:1733509039.250889  588985 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:48.363  I0000 00:00:1733509039.252916  588985 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:48.363   19:17:19 sma.sma_plugins -- sma/plugins.sh@122 -- # [[ nvmf_tcp:plugin2-device2:nop == \n\v\m\f\_\t\c\p\:\p\l\u\g\i\n\2\-\d\e\v\i\c\e\2\:\n\o\p ]]
00:18:48.363   19:17:19 sma.sma_plugins -- sma/plugins.sh@124 -- # killprocess 588785
00:18:48.363   19:17:19 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 588785 ']'
00:18:48.363   19:17:19 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 588785
00:18:48.363    19:17:19 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:18:48.363   19:17:19 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:48.363    19:17:19 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 588785
00:18:48.363   19:17:19 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:18:48.363   19:17:19 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:18:48.363   19:17:19 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 588785'
00:18:48.363  killing process with pid 588785
00:18:48.622   19:17:19 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 588785
00:18:48.622   19:17:19 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 588785
00:18:48.622   19:17:19 sma.sma_plugins -- sma/plugins.sh@134 -- # smapid=589015
00:18:48.622   19:17:19 sma.sma_plugins -- sma/plugins.sh@135 -- # sma_waitforlisten
00:18:48.622   19:17:19 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:18:48.622   19:17:19 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:18:48.622   19:17:19 sma.sma_plugins -- sma/plugins.sh@127 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:18:48.622   19:17:19 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:18:48.622   19:17:19 sma.sma_plugins -- sma/plugins.sh@127 -- # SMA_PLUGINS=plugin1:plugin2
00:18:48.622   19:17:19 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:18:48.622   19:17:19 sma.sma_plugins -- sma/plugins.sh@127 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:18:48.622   19:17:19 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:18:48.622    19:17:19 sma.sma_plugins -- sma/plugins.sh@127 -- # cat
00:18:48.622   19:17:19 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:18:48.906  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:48.906  I0000 00:00:1733509039.610597  589015 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:49.519   19:17:20 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:18:49.519   19:17:20 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:18:49.519   19:17:20 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:18:49.519   19:17:20 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:18:49.519    19:17:20 sma.sma_plugins -- sma/plugins.sh@137 -- # create_device nvme
00:18:49.519    19:17:20 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:49.519    19:17:20 sma.sma_plugins -- sma/plugins.sh@137 -- # jq -r .handle
00:18:49.777  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:49.777  I0000 00:00:1733509040.657472  589183 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:49.777  I0000 00:00:1733509040.659330  589183 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:49.777   19:17:20 sma.sma_plugins -- sma/plugins.sh@137 -- # [[ nvme:plugin1-device1:nop == \n\v\m\e\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\1\:\n\o\p ]]
00:18:49.777    19:17:20 sma.sma_plugins -- sma/plugins.sh@138 -- # create_device nvmf_tcp
00:18:49.777    19:17:20 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:49.777    19:17:20 sma.sma_plugins -- sma/plugins.sh@138 -- # jq -r .handle
00:18:50.036  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:50.036  I0000 00:00:1733509040.928560  589211 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:50.036  I0000 00:00:1733509040.930481  589211 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:50.036   19:17:20 sma.sma_plugins -- sma/plugins.sh@138 -- # [[ nvmf_tcp:plugin2-device2:nop == \n\v\m\f\_\t\c\p\:\p\l\u\g\i\n\2\-\d\e\v\i\c\e\2\:\n\o\p ]]
00:18:50.036   19:17:20 sma.sma_plugins -- sma/plugins.sh@140 -- # killprocess 589015
00:18:50.036   19:17:20 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 589015 ']'
00:18:50.036   19:17:20 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 589015
00:18:50.036    19:17:20 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:18:50.036   19:17:20 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:50.036    19:17:20 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 589015
00:18:50.295   19:17:20 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:18:50.295   19:17:20 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:18:50.295   19:17:20 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 589015'
00:18:50.295  killing process with pid 589015
00:18:50.295   19:17:20 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 589015
00:18:50.295   19:17:20 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 589015
00:18:50.295   19:17:21 sma.sma_plugins -- sma/plugins.sh@152 -- # smapid=589309
00:18:50.295   19:17:21 sma.sma_plugins -- sma/plugins.sh@153 -- # sma_waitforlisten
00:18:50.295   19:17:21 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:18:50.295   19:17:21 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:18:50.295   19:17:21 sma.sma_plugins -- sma/plugins.sh@143 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:18:50.295   19:17:21 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:18:50.295   19:17:21 sma.sma_plugins -- sma/plugins.sh@143 -- # SMA_PLUGINS=plugin1
00:18:50.295   19:17:21 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:18:50.295    19:17:21 sma.sma_plugins -- sma/plugins.sh@143 -- # cat
00:18:50.295   19:17:21 sma.sma_plugins -- sma/plugins.sh@143 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:18:50.295   19:17:21 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:18:50.295   19:17:21 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:18:50.553  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:50.553  I0000 00:00:1733509041.287344  589309 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:51.119   19:17:22 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:18:51.119   19:17:22 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:18:51.119   19:17:22 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:18:51.377   19:17:22 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:18:51.377    19:17:22 sma.sma_plugins -- sma/plugins.sh@155 -- # create_device nvme
00:18:51.377    19:17:22 sma.sma_plugins -- sma/plugins.sh@155 -- # jq -r .handle
00:18:51.377    19:17:22 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:51.377  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:51.377  I0000 00:00:1733509042.326686  589413 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:51.636  I0000 00:00:1733509042.328513  589413 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:51.636   19:17:22 sma.sma_plugins -- sma/plugins.sh@155 -- # [[ nvme:plugin1-device1:nop == \n\v\m\e\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\1\:\n\o\p ]]
00:18:51.636    19:17:22 sma.sma_plugins -- sma/plugins.sh@156 -- # create_device nvmf_tcp
00:18:51.636    19:17:22 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:51.636    19:17:22 sma.sma_plugins -- sma/plugins.sh@156 -- # jq -r .handle
00:18:51.894  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:51.894  I0000 00:00:1733509042.594407  589553 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:51.894  I0000 00:00:1733509042.596344  589553 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:51.894   19:17:22 sma.sma_plugins -- sma/plugins.sh@156 -- # [[ nvmf_tcp:plugin2-device2:nop == \n\v\m\f\_\t\c\p\:\p\l\u\g\i\n\2\-\d\e\v\i\c\e\2\:\n\o\p ]]
00:18:51.894   19:17:22 sma.sma_plugins -- sma/plugins.sh@158 -- # killprocess 589309
00:18:51.894   19:17:22 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 589309 ']'
00:18:51.894   19:17:22 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 589309
00:18:51.894    19:17:22 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:18:51.894   19:17:22 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:51.894    19:17:22 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 589309
00:18:51.894   19:17:22 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:18:51.894   19:17:22 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:18:51.894   19:17:22 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 589309'
00:18:51.894  killing process with pid 589309
00:18:51.894   19:17:22 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 589309
00:18:51.894   19:17:22 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 589309
00:18:51.894   19:17:22 sma.sma_plugins -- sma/plugins.sh@161 -- # crypto_engines=(crypto-plugin1 crypto-plugin2)
00:18:51.894   19:17:22 sma.sma_plugins -- sma/plugins.sh@162 -- # for crypto in "${crypto_engines[@]}"
00:18:51.894   19:17:22 sma.sma_plugins -- sma/plugins.sh@175 -- # smapid=589584
00:18:51.894   19:17:22 sma.sma_plugins -- sma/plugins.sh@176 -- # sma_waitforlisten
00:18:51.894   19:17:22 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:18:51.894   19:17:22 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:18:51.894   19:17:22 sma.sma_plugins -- sma/plugins.sh@163 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:18:51.894   19:17:22 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:18:51.894   19:17:22 sma.sma_plugins -- sma/plugins.sh@163 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:18:51.894   19:17:22 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:18:51.894   19:17:22 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:18:51.894    19:17:22 sma.sma_plugins -- sma/plugins.sh@163 -- # cat
00:18:51.894   19:17:22 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:18:52.153  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:52.153  I0000 00:00:1733509042.952222  589584 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:53.086   19:17:23 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:18:53.086   19:17:23 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:18:53.086   19:17:23 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:18:53.086   19:17:23 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:18:53.086    19:17:23 sma.sma_plugins -- sma/plugins.sh@178 -- # create_device nvme
00:18:53.086    19:17:23 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:53.086    19:17:23 sma.sma_plugins -- sma/plugins.sh@178 -- # jq -r .handle
00:18:53.086  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:53.086  I0000 00:00:1733509043.986566  589749 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:53.086  I0000 00:00:1733509043.988656  589749 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:53.086   19:17:24 sma.sma_plugins -- sma/plugins.sh@178 -- # [[ nvme:plugin1-device1:crypto-plugin1 == nvme:plugin1-device1:crypto-plugin1 ]]
00:18:53.086    19:17:24 sma.sma_plugins -- sma/plugins.sh@179 -- # create_device nvmf_tcp
00:18:53.086    19:17:24 sma.sma_plugins -- sma/plugins.sh@179 -- # jq -r .handle
00:18:53.086    19:17:24 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:53.652  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:53.652  I0000 00:00:1733509044.296565  589782 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:53.652  I0000 00:00:1733509044.298341  589782 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:53.652   19:17:24 sma.sma_plugins -- sma/plugins.sh@179 -- # [[ nvmf_tcp:plugin2-device2:crypto-plugin1 == nvmf_tcp:plugin2-device2:crypto-plugin1 ]]
00:18:53.652   19:17:24 sma.sma_plugins -- sma/plugins.sh@181 -- # killprocess 589584
00:18:53.652   19:17:24 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 589584 ']'
00:18:53.652   19:17:24 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 589584
00:18:53.652    19:17:24 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:18:53.652   19:17:24 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:53.652    19:17:24 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 589584
00:18:53.652   19:17:24 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:18:53.652   19:17:24 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:18:53.652   19:17:24 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 589584'
00:18:53.652  killing process with pid 589584
00:18:53.652   19:17:24 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 589584
00:18:53.652   19:17:24 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 589584
00:18:53.652   19:17:24 sma.sma_plugins -- sma/plugins.sh@162 -- # for crypto in "${crypto_engines[@]}"
00:18:53.652   19:17:24 sma.sma_plugins -- sma/plugins.sh@175 -- # smapid=589868
00:18:53.652   19:17:24 sma.sma_plugins -- sma/plugins.sh@176 -- # sma_waitforlisten
00:18:53.652   19:17:24 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:18:53.652   19:17:24 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:18:53.652   19:17:24 sma.sma_plugins -- sma/plugins.sh@163 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:18:53.652    19:17:24 sma.sma_plugins -- sma/plugins.sh@163 -- # cat
00:18:53.652   19:17:24 sma.sma_plugins -- sma/plugins.sh@163 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:18:53.652   19:17:24 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:18:53.652   19:17:24 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:18:53.652   19:17:24 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:18:53.652   19:17:24 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:18:53.910  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:53.910  I0000 00:00:1733509044.645219  589868 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:54.476   19:17:25 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:18:54.476   19:17:25 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:18:54.476   19:17:25 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:18:54.732   19:17:25 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:18:54.732    19:17:25 sma.sma_plugins -- sma/plugins.sh@178 -- # create_device nvme
00:18:54.732    19:17:25 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:54.732    19:17:25 sma.sma_plugins -- sma/plugins.sh@178 -- # jq -r .handle
00:18:54.990  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:54.990  I0000 00:00:1733509045.692021  589974 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:54.990  I0000 00:00:1733509045.693890  589974 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:54.990   19:17:25 sma.sma_plugins -- sma/plugins.sh@178 -- # [[ nvme:plugin1-device1:crypto-plugin2 == nvme:plugin1-device1:crypto-plugin2 ]]
00:18:54.990    19:17:25 sma.sma_plugins -- sma/plugins.sh@179 -- # create_device nvmf_tcp
00:18:54.990    19:17:25 sma.sma_plugins -- sma/plugins.sh@179 -- # jq -r .handle
00:18:54.990    19:17:25 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:55.248  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:55.248  I0000 00:00:1733509045.948371  590116 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:55.248  I0000 00:00:1733509045.950354  590116 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:55.248   19:17:25 sma.sma_plugins -- sma/plugins.sh@179 -- # [[ nvmf_tcp:plugin2-device2:crypto-plugin2 == nvmf_tcp:plugin2-device2:crypto-plugin2 ]]
00:18:55.248   19:17:25 sma.sma_plugins -- sma/plugins.sh@181 -- # killprocess 589868
00:18:55.248   19:17:25 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 589868 ']'
00:18:55.248   19:17:25 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 589868
00:18:55.248    19:17:25 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:18:55.248   19:17:25 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:55.248    19:17:25 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 589868
00:18:55.248   19:17:26 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:18:55.248   19:17:26 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:18:55.248   19:17:26 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 589868'
00:18:55.248  killing process with pid 589868
00:18:55.248   19:17:26 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 589868
00:18:55.248   19:17:26 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 589868
00:18:55.248   19:17:26 sma.sma_plugins -- sma/plugins.sh@184 -- # cleanup
00:18:55.248   19:17:26 sma.sma_plugins -- sma/plugins.sh@13 -- # killprocess 587394
00:18:55.248   19:17:26 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 587394 ']'
00:18:55.248   19:17:26 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 587394
00:18:55.248    19:17:26 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:18:55.248   19:17:26 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:55.248    19:17:26 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 587394
00:18:55.248   19:17:26 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:55.248   19:17:26 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:55.248   19:17:26 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 587394'
00:18:55.248  killing process with pid 587394
00:18:55.248   19:17:26 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 587394
00:18:55.248   19:17:26 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 587394
00:18:57.775   19:17:28 sma.sma_plugins -- sma/plugins.sh@14 -- # killprocess 589868
00:18:57.775   19:17:28 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 589868 ']'
00:18:57.775   19:17:28 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 589868
00:18:57.775  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (589868) - No such process
00:18:57.775   19:17:28 sma.sma_plugins -- common/autotest_common.sh@981 -- # echo 'Process with pid 589868 is not found'
00:18:57.775  Process with pid 589868 is not found
00:18:57.775   19:17:28 sma.sma_plugins -- sma/plugins.sh@185 -- # trap - SIGINT SIGTERM EXIT
00:18:57.775  
00:18:57.775  real	0m18.799s
00:18:57.775  user	0m25.757s
00:18:57.775  sys	0m2.103s
00:18:57.775   19:17:28 sma.sma_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:57.775   19:17:28 sma.sma_plugins -- common/autotest_common.sh@10 -- # set +x
00:18:57.775  ************************************
00:18:57.775  END TEST sma_plugins
00:18:57.775  ************************************
00:18:57.775   19:17:28 sma -- sma/sma.sh@14 -- # run_test sma_discovery /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/discovery.sh
00:18:57.775   19:17:28 sma -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:18:57.775   19:17:28 sma -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:57.775   19:17:28 sma -- common/autotest_common.sh@10 -- # set +x
00:18:57.775  ************************************
00:18:57.775  START TEST sma_discovery
00:18:57.775  ************************************
00:18:57.775   19:17:28 sma.sma_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/discovery.sh
00:18:57.775  * Looking for test storage...
00:18:57.775  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:18:57.775    19:17:28 sma.sma_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:18:57.775     19:17:28 sma.sma_discovery -- common/autotest_common.sh@1711 -- # lcov --version
00:18:57.775     19:17:28 sma.sma_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:18:57.775    19:17:28 sma.sma_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:18:57.775    19:17:28 sma.sma_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:18:57.775    19:17:28 sma.sma_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l
00:18:57.775    19:17:28 sma.sma_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l
00:18:57.775    19:17:28 sma.sma_discovery -- scripts/common.sh@336 -- # IFS=.-:
00:18:57.775    19:17:28 sma.sma_discovery -- scripts/common.sh@336 -- # read -ra ver1
00:18:57.775    19:17:28 sma.sma_discovery -- scripts/common.sh@337 -- # IFS=.-:
00:18:57.775    19:17:28 sma.sma_discovery -- scripts/common.sh@337 -- # read -ra ver2
00:18:57.775    19:17:28 sma.sma_discovery -- scripts/common.sh@338 -- # local 'op=<'
00:18:57.775    19:17:28 sma.sma_discovery -- scripts/common.sh@340 -- # ver1_l=2
00:18:57.775    19:17:28 sma.sma_discovery -- scripts/common.sh@341 -- # ver2_l=1
00:18:57.775    19:17:28 sma.sma_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:18:57.775    19:17:28 sma.sma_discovery -- scripts/common.sh@344 -- # case "$op" in
00:18:57.775    19:17:28 sma.sma_discovery -- scripts/common.sh@345 -- # : 1
00:18:57.775    19:17:28 sma.sma_discovery -- scripts/common.sh@364 -- # (( v = 0 ))
00:18:57.775    19:17:28 sma.sma_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:18:57.775     19:17:28 sma.sma_discovery -- scripts/common.sh@365 -- # decimal 1
00:18:57.775     19:17:28 sma.sma_discovery -- scripts/common.sh@353 -- # local d=1
00:18:57.775     19:17:28 sma.sma_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:18:57.775     19:17:28 sma.sma_discovery -- scripts/common.sh@355 -- # echo 1
00:18:57.775    19:17:28 sma.sma_discovery -- scripts/common.sh@365 -- # ver1[v]=1
00:18:57.775     19:17:28 sma.sma_discovery -- scripts/common.sh@366 -- # decimal 2
00:18:57.775     19:17:28 sma.sma_discovery -- scripts/common.sh@353 -- # local d=2
00:18:57.775     19:17:28 sma.sma_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:18:57.775     19:17:28 sma.sma_discovery -- scripts/common.sh@355 -- # echo 2
00:18:57.775    19:17:28 sma.sma_discovery -- scripts/common.sh@366 -- # ver2[v]=2
00:18:57.775    19:17:28 sma.sma_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:18:57.775    19:17:28 sma.sma_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:18:57.775    19:17:28 sma.sma_discovery -- scripts/common.sh@368 -- # return 0
00:18:57.775    19:17:28 sma.sma_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:18:57.775    19:17:28 sma.sma_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:18:57.775  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:57.775  		--rc genhtml_branch_coverage=1
00:18:57.775  		--rc genhtml_function_coverage=1
00:18:57.775  		--rc genhtml_legend=1
00:18:57.775  		--rc geninfo_all_blocks=1
00:18:57.775  		--rc geninfo_unexecuted_blocks=1
00:18:57.775  		
00:18:57.775  		'
00:18:57.776    19:17:28 sma.sma_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:18:57.776  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:57.776  		--rc genhtml_branch_coverage=1
00:18:57.776  		--rc genhtml_function_coverage=1
00:18:57.776  		--rc genhtml_legend=1
00:18:57.776  		--rc geninfo_all_blocks=1
00:18:57.776  		--rc geninfo_unexecuted_blocks=1
00:18:57.776  		
00:18:57.776  		'
00:18:57.776    19:17:28 sma.sma_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:18:57.776  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:57.776  		--rc genhtml_branch_coverage=1
00:18:57.776  		--rc genhtml_function_coverage=1
00:18:57.776  		--rc genhtml_legend=1
00:18:57.776  		--rc geninfo_all_blocks=1
00:18:57.776  		--rc geninfo_unexecuted_blocks=1
00:18:57.776  		
00:18:57.776  		'
00:18:57.776    19:17:28 sma.sma_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:18:57.776  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:57.776  		--rc genhtml_branch_coverage=1
00:18:57.776  		--rc genhtml_function_coverage=1
00:18:57.776  		--rc genhtml_legend=1
00:18:57.776  		--rc geninfo_all_blocks=1
00:18:57.776  		--rc geninfo_unexecuted_blocks=1
00:18:57.776  		
00:18:57.776  		'
00:18:57.776   19:17:28 sma.sma_discovery -- sma/discovery.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:18:57.776   19:17:28 sma.sma_discovery -- sma/discovery.sh@12 -- # sma_py=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:57.776   19:17:28 sma.sma_discovery -- sma/discovery.sh@13 -- # rpc_py=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:18:57.776   19:17:28 sma.sma_discovery -- sma/discovery.sh@15 -- # t1sock=/var/tmp/spdk.sock1
00:18:57.776   19:17:28 sma.sma_discovery -- sma/discovery.sh@16 -- # t2sock=/var/tmp/spdk.sock2
00:18:57.776   19:17:28 sma.sma_discovery -- sma/discovery.sh@17 -- # invalid_port=8008
00:18:57.776   19:17:28 sma.sma_discovery -- sma/discovery.sh@18 -- # t1dscport=8009
00:18:57.776   19:17:28 sma.sma_discovery -- sma/discovery.sh@19 -- # t2dscport1=8010
00:18:57.776   19:17:28 sma.sma_discovery -- sma/discovery.sh@20 -- # t2dscport2=8011
00:18:57.776   19:17:28 sma.sma_discovery -- sma/discovery.sh@21 -- # t1nqn=nqn.2016-06.io.spdk:node1
00:18:57.776   19:17:28 sma.sma_discovery -- sma/discovery.sh@22 -- # t2nqn=nqn.2016-06.io.spdk:node2
00:18:57.776   19:17:28 sma.sma_discovery -- sma/discovery.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host0
00:18:57.776   19:17:28 sma.sma_discovery -- sma/discovery.sh@24 -- # cleanup_period=1
00:18:57.776   19:17:28 sma.sma_discovery -- sma/discovery.sh@132 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:18:57.776   19:17:28 sma.sma_discovery -- sma/discovery.sh@136 -- # t1pid=590480
00:18:57.776   19:17:28 sma.sma_discovery -- sma/discovery.sh@135 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/spdk.sock1 -m 0x1
00:18:57.776   19:17:28 sma.sma_discovery -- sma/discovery.sh@138 -- # t2pid=590481
00:18:57.776   19:17:28 sma.sma_discovery -- sma/discovery.sh@137 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/spdk.sock2 -m 0x2
00:18:57.776   19:17:28 sma.sma_discovery -- sma/discovery.sh@142 -- # tgtpid=590482
00:18:57.776   19:17:28 sma.sma_discovery -- sma/discovery.sh@141 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x4
00:18:57.776   19:17:28 sma.sma_discovery -- sma/discovery.sh@153 -- # smapid=590483
00:18:57.776   19:17:28 sma.sma_discovery -- sma/discovery.sh@155 -- # waitforlisten 590482
00:18:57.776   19:17:28 sma.sma_discovery -- common/autotest_common.sh@835 -- # '[' -z 590482 ']'
00:18:57.776   19:17:28 sma.sma_discovery -- sma/discovery.sh@145 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:18:57.776   19:17:28 sma.sma_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:57.776    19:17:28 sma.sma_discovery -- sma/discovery.sh@145 -- # cat
00:18:57.776   19:17:28 sma.sma_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:57.776   19:17:28 sma.sma_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:57.776  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:57.776   19:17:28 sma.sma_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:57.776   19:17:28 sma.sma_discovery -- common/autotest_common.sh@10 -- # set +x
00:18:57.776  [2024-12-06 19:17:28.428656] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:18:57.776  [2024-12-06 19:17:28.428659] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:18:57.776  [2024-12-06 19:17:28.428809] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid590480 ]
00:18:57.776  [2024-12-06 19:17:28.428809] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid590481 ]
00:18:57.776  [2024-12-06 19:17:28.428885] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:18:57.776  [2024-12-06 19:17:28.429010] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid590482 ]
00:18:57.776  EAL: No free 2048 kB hugepages reported on node 1
00:18:57.776  EAL: No free 2048 kB hugepages reported on node 1
00:18:57.776  EAL: No free 2048 kB hugepages reported on node 1
00:18:57.776  [2024-12-06 19:17:28.588247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:57.776  [2024-12-06 19:17:28.588299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:57.776  [2024-12-06 19:17:28.588499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:57.776  [2024-12-06 19:17:28.715047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:18:58.033  [2024-12-06 19:17:28.731110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:18:58.033  [2024-12-06 19:17:28.746065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:58.966  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:58.966  I0000 00:00:1733509049.602919  590483 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:58.966   19:17:29 sma.sma_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:58.966   19:17:29 sma.sma_discovery -- common/autotest_common.sh@868 -- # return 0
00:18:58.966   19:17:29 sma.sma_discovery -- sma/discovery.sh@156 -- # waitforlisten 590480 /var/tmp/spdk.sock1
00:18:58.966   19:17:29 sma.sma_discovery -- common/autotest_common.sh@835 -- # '[' -z 590480 ']'
00:18:58.966   19:17:29 sma.sma_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock1
00:18:58.966   19:17:29 sma.sma_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:58.966   19:17:29 sma.sma_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock1...'
00:18:58.966  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock1...
00:18:58.966   19:17:29 sma.sma_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:58.966   19:17:29 sma.sma_discovery -- common/autotest_common.sh@10 -- # set +x
00:18:58.966  [2024-12-06 19:17:29.617362] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:18:58.966   19:17:29 sma.sma_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:58.966   19:17:29 sma.sma_discovery -- common/autotest_common.sh@868 -- # return 0
00:18:58.966   19:17:29 sma.sma_discovery -- sma/discovery.sh@157 -- # waitforlisten 590481 /var/tmp/spdk.sock2
00:18:58.966   19:17:29 sma.sma_discovery -- common/autotest_common.sh@835 -- # '[' -z 590481 ']'
00:18:58.966   19:17:29 sma.sma_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock2
00:18:58.966   19:17:29 sma.sma_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:58.966   19:17:29 sma.sma_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock2...'
00:18:58.966  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock2...
00:18:58.966   19:17:29 sma.sma_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:58.966   19:17:29 sma.sma_discovery -- common/autotest_common.sh@10 -- # set +x
00:18:59.533   19:17:30 sma.sma_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:59.533   19:17:30 sma.sma_discovery -- common/autotest_common.sh@868 -- # return 0
00:18:59.533    19:17:30 sma.sma_discovery -- sma/discovery.sh@162 -- # uuidgen
00:18:59.533   19:17:30 sma.sma_discovery -- sma/discovery.sh@162 -- # t1uuid=2561c86f-ebe6-4293-b80b-086d4bebbc7e
00:18:59.533    19:17:30 sma.sma_discovery -- sma/discovery.sh@163 -- # uuidgen
00:18:59.533   19:17:30 sma.sma_discovery -- sma/discovery.sh@163 -- # t2uuid=9efb2999-4008-4dcf-92c9-906cc8a2a1ad
00:18:59.533    19:17:30 sma.sma_discovery -- sma/discovery.sh@164 -- # uuidgen
00:18:59.533   19:17:30 sma.sma_discovery -- sma/discovery.sh@164 -- # t2uuid2=29d53783-24d5-4067-a97a-b7b6df495f81
00:18:59.533   19:17:30 sma.sma_discovery -- sma/discovery.sh@166 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock1
00:18:59.533  [2024-12-06 19:17:30.451301] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:18:59.791  [2024-12-06 19:17:30.491845] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:18:59.791  [2024-12-06 19:17:30.499598] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 8009 ***
00:18:59.791  null0
00:18:59.791   19:17:30 sma.sma_discovery -- sma/discovery.sh@176 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock2
00:19:00.049  [2024-12-06 19:17:30.773959] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:19:00.049  [2024-12-06 19:17:30.830512] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 ***
00:19:00.049  [2024-12-06 19:17:30.838307] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 8010 ***
00:19:00.049  [2024-12-06 19:17:30.846361] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 8011 ***
00:19:00.049  null0
00:19:00.049  null1
00:19:00.049   19:17:30 sma.sma_discovery -- sma/discovery.sh@190 -- # sma_waitforlisten
00:19:00.049   19:17:30 sma.sma_discovery -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:19:00.049   19:17:30 sma.sma_discovery -- sma/common.sh@8 -- # local sma_port=8080
00:19:00.049   19:17:30 sma.sma_discovery -- sma/common.sh@10 -- # (( i = 0 ))
00:19:00.049   19:17:30 sma.sma_discovery -- sma/common.sh@10 -- # (( i < 5 ))
00:19:00.049   19:17:30 sma.sma_discovery -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:19:00.049   19:17:30 sma.sma_discovery -- sma/common.sh@12 -- # return 0
00:19:00.049   19:17:30 sma.sma_discovery -- sma/discovery.sh@192 -- # localnqn=nqn.2016-06.io.spdk:local0
00:19:00.049    19:17:30 sma.sma_discovery -- sma/discovery.sh@195 -- # create_device nqn.2016-06.io.spdk:local0
00:19:00.049    19:17:30 sma.sma_discovery -- sma/discovery.sh@69 -- # local nqn=nqn.2016-06.io.spdk:local0
00:19:00.049    19:17:30 sma.sma_discovery -- sma/discovery.sh@195 -- # jq -r .handle
00:19:00.049    19:17:30 sma.sma_discovery -- sma/discovery.sh@70 -- # local volume_id=
00:19:00.049    19:17:30 sma.sma_discovery -- sma/discovery.sh@71 -- # local volume=
00:19:00.049    19:17:30 sma.sma_discovery -- sma/discovery.sh@73 -- # shift
00:19:00.049    19:17:30 sma.sma_discovery -- sma/discovery.sh@74 -- # [[ -n '' ]]
00:19:00.049    19:17:30 sma.sma_discovery -- sma/discovery.sh@78 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:19:00.313  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:19:00.313  I0000 00:00:1733509051.124547  590798 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:19:00.313  I0000 00:00:1733509051.126323  590798 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:19:00.313  [2024-12-06 19:17:31.146294] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 ***
00:19:00.313   19:17:31 sma.sma_discovery -- sma/discovery.sh@195 -- # device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:19:00.313   19:17:31 sma.sma_discovery -- sma/discovery.sh@198 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:19:00.571  [
00:19:00.571    {
00:19:00.571      "nqn": "nqn.2016-06.io.spdk:local0",
00:19:00.571      "subtype": "NVMe",
00:19:00.571      "listen_addresses": [
00:19:00.571        {
00:19:00.571          "trtype": "TCP",
00:19:00.571          "adrfam": "IPv4",
00:19:00.571          "traddr": "127.0.0.1",
00:19:00.571          "trsvcid": "4419"
00:19:00.571        }
00:19:00.571      ],
00:19:00.571      "allow_any_host": false,
00:19:00.571      "hosts": [],
00:19:00.571      "serial_number": "00000000000000000000",
00:19:00.571      "model_number": "SPDK bdev Controller",
00:19:00.571      "max_namespaces": 32,
00:19:00.571      "min_cntlid": 1,
00:19:00.571      "max_cntlid": 65519,
00:19:00.571      "namespaces": []
00:19:00.571    }
00:19:00.571  ]
00:19:00.571   19:17:31 sma.sma_discovery -- sma/discovery.sh@201 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 2561c86f-ebe6-4293-b80b-086d4bebbc7e 8009 8010
00:19:00.571   19:17:31 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:19:00.571   19:17:31 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:19:00.571   19:17:31 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:19:00.571    19:17:31 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 2561c86f-ebe6-4293-b80b-086d4bebbc7e 8009 8010
00:19:00.571    19:17:31 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=2561c86f-ebe6-4293-b80b-086d4bebbc7e
00:19:00.571    19:17:31 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:19:00.571    19:17:31 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:19:00.571     19:17:31 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 2561c86f-ebe6-4293-b80b-086d4bebbc7e
00:19:00.571     19:17:31 sma.sma_discovery -- sma/common.sh@20 -- # python
00:19:00.571     19:17:31 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8009 8010
00:19:00.571     19:17:31 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8009' '8010')
00:19:00.571     19:17:31 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:19:00.571     19:17:31 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:19:00.571     19:17:31 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:19:00.571     19:17:31 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:19:00.571     19:17:31 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 ))
00:19:00.571     19:17:31 sma.sma_discovery -- sma/discovery.sh@44 -- # echo ,
00:19:00.571     19:17:31 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:19:00.571     19:17:31 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:19:00.571     19:17:31 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:19:00.571     19:17:31 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 ))
00:19:00.571     19:17:31 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:19:00.571     19:17:31 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:19:00.829  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:19:00.829  I0000 00:00:1733509051.738159  590946 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:19:00.829  I0000 00:00:1733509051.739845  590946 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:19:03.357  {}
00:19:03.357    19:17:34 sma.sma_discovery -- sma/discovery.sh@204 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:19:03.357    19:17:34 sma.sma_discovery -- sma/discovery.sh@204 -- # jq -r '. | length'
00:19:03.357   19:17:34 sma.sma_discovery -- sma/discovery.sh@204 -- # [[ 2 -eq 2 ]]
00:19:03.357   19:17:34 sma.sma_discovery -- sma/discovery.sh@206 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:19:03.357   19:17:34 sma.sma_discovery -- sma/discovery.sh@206 -- # jq -r '.[].trid.trsvcid'
00:19:03.357   19:17:34 sma.sma_discovery -- sma/discovery.sh@206 -- # grep 8009
00:19:03.923  8009
00:19:03.923   19:17:34 sma.sma_discovery -- sma/discovery.sh@207 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:19:03.923   19:17:34 sma.sma_discovery -- sma/discovery.sh@207 -- # jq -r '.[].trid.trsvcid'
00:19:03.923   19:17:34 sma.sma_discovery -- sma/discovery.sh@207 -- # grep 8010
00:19:03.923  8010
00:19:03.924    19:17:34 sma.sma_discovery -- sma/discovery.sh@210 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:19:03.924    19:17:34 sma.sma_discovery -- sma/discovery.sh@210 -- # jq -r '.[].namespaces | length'
00:19:04.182   19:17:35 sma.sma_discovery -- sma/discovery.sh@210 -- # [[ 1 -eq 1 ]]
00:19:04.182    19:17:35 sma.sma_discovery -- sma/discovery.sh@211 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:19:04.182    19:17:35 sma.sma_discovery -- sma/discovery.sh@211 -- # jq -r '.[].namespaces[0].uuid'
00:19:04.440   19:17:35 sma.sma_discovery -- sma/discovery.sh@211 -- # [[ 2561c86f-ebe6-4293-b80b-086d4bebbc7e == \2\5\6\1\c\8\6\f\-\e\b\e\6\-\4\2\9\3\-\b\8\0\b\-\0\8\6\d\4\b\e\b\b\c\7\e ]]
00:19:04.440   19:17:35 sma.sma_discovery -- sma/discovery.sh@214 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 9efb2999-4008-4dcf-92c9-906cc8a2a1ad 8010
00:19:04.440   19:17:35 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:19:04.440   19:17:35 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:19:04.440   19:17:35 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:19:04.440    19:17:35 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 9efb2999-4008-4dcf-92c9-906cc8a2a1ad 8010
00:19:04.440    19:17:35 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=9efb2999-4008-4dcf-92c9-906cc8a2a1ad
00:19:04.440    19:17:35 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:19:04.440    19:17:35 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:19:04.440     19:17:35 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 9efb2999-4008-4dcf-92c9-906cc8a2a1ad
00:19:04.440     19:17:35 sma.sma_discovery -- sma/common.sh@20 -- # python
00:19:04.698     19:17:35 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8010
00:19:04.698     19:17:35 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8010')
00:19:04.698     19:17:35 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:19:04.698     19:17:35 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:19:04.698     19:17:35 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:19:04.698     19:17:35 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:19:04.698     19:17:35 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 1 ))
00:19:04.698     19:17:35 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:19:04.698     19:17:35 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:19:04.956  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:19:04.956  I0000 00:00:1733509055.664654  591405 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:19:04.956  I0000 00:00:1733509055.666461  591405 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:19:04.956  {}
00:19:04.956    19:17:35 sma.sma_discovery -- sma/discovery.sh@217 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:19:04.956    19:17:35 sma.sma_discovery -- sma/discovery.sh@217 -- # jq -r '. | length'
00:19:05.215   19:17:35 sma.sma_discovery -- sma/discovery.sh@217 -- # [[ 2 -eq 2 ]]
00:19:05.215    19:17:35 sma.sma_discovery -- sma/discovery.sh@218 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:19:05.215    19:17:35 sma.sma_discovery -- sma/discovery.sh@218 -- # jq -r '.[].namespaces | length'
00:19:05.474   19:17:36 sma.sma_discovery -- sma/discovery.sh@218 -- # [[ 2 -eq 2 ]]
00:19:05.474   19:17:36 sma.sma_discovery -- sma/discovery.sh@219 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:19:05.474   19:17:36 sma.sma_discovery -- sma/discovery.sh@219 -- # jq -r '.[].namespaces[].uuid'
00:19:05.474   19:17:36 sma.sma_discovery -- sma/discovery.sh@219 -- # grep 2561c86f-ebe6-4293-b80b-086d4bebbc7e
00:19:05.733  2561c86f-ebe6-4293-b80b-086d4bebbc7e
00:19:05.733   19:17:36 sma.sma_discovery -- sma/discovery.sh@220 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:19:05.733   19:17:36 sma.sma_discovery -- sma/discovery.sh@220 -- # jq -r '.[].namespaces[].uuid'
00:19:05.733   19:17:36 sma.sma_discovery -- sma/discovery.sh@220 -- # grep 9efb2999-4008-4dcf-92c9-906cc8a2a1ad
00:19:05.991  9efb2999-4008-4dcf-92c9-906cc8a2a1ad
00:19:05.991   19:17:36 sma.sma_discovery -- sma/discovery.sh@223 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 2561c86f-ebe6-4293-b80b-086d4bebbc7e
00:19:05.991   19:17:36 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:19:05.991    19:17:36 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 2561c86f-ebe6-4293-b80b-086d4bebbc7e
00:19:05.991    19:17:36 sma.sma_discovery -- sma/common.sh@20 -- # python
00:19:06.250  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:19:06.250  I0000 00:00:1733509057.109076  591698 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:19:06.250  I0000 00:00:1733509057.110791  591698 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:19:06.250  {}
00:19:06.250    19:17:37 sma.sma_discovery -- sma/discovery.sh@227 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:19:06.250    19:17:37 sma.sma_discovery -- sma/discovery.sh@227 -- # jq -r '. | length'
00:19:06.507   19:17:37 sma.sma_discovery -- sma/discovery.sh@227 -- # [[ 1 -eq 1 ]]
00:19:06.507   19:17:37 sma.sma_discovery -- sma/discovery.sh@228 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:19:06.507   19:17:37 sma.sma_discovery -- sma/discovery.sh@228 -- # jq -r '.[].trid.trsvcid'
00:19:06.507   19:17:37 sma.sma_discovery -- sma/discovery.sh@228 -- # grep 8010
00:19:06.765  8010
00:19:06.765    19:17:37 sma.sma_discovery -- sma/discovery.sh@230 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:19:06.765    19:17:37 sma.sma_discovery -- sma/discovery.sh@230 -- # jq -r '.[].namespaces | length'
00:19:07.332   19:17:37 sma.sma_discovery -- sma/discovery.sh@230 -- # [[ 1 -eq 1 ]]
00:19:07.332    19:17:37 sma.sma_discovery -- sma/discovery.sh@231 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:19:07.332    19:17:37 sma.sma_discovery -- sma/discovery.sh@231 -- # jq -r '.[].namespaces[0].uuid'
00:19:07.332   19:17:38 sma.sma_discovery -- sma/discovery.sh@231 -- # [[ 9efb2999-4008-4dcf-92c9-906cc8a2a1ad == \9\e\f\b\2\9\9\9\-\4\0\0\8\-\4\d\c\f\-\9\2\c\9\-\9\0\6\c\c\8\a\2\a\1\a\d ]]
00:19:07.332   19:17:38 sma.sma_discovery -- sma/discovery.sh@234 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 9efb2999-4008-4dcf-92c9-906cc8a2a1ad
00:19:07.332   19:17:38 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:19:07.332    19:17:38 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 9efb2999-4008-4dcf-92c9-906cc8a2a1ad
00:19:07.332    19:17:38 sma.sma_discovery -- sma/common.sh@20 -- # python
00:19:07.898  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:19:07.899  I0000 00:00:1733509058.555342  591873 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:19:07.899  I0000 00:00:1733509058.557169  591873 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:19:07.899  {}
00:19:07.899    19:17:38 sma.sma_discovery -- sma/discovery.sh@237 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:19:07.899    19:17:38 sma.sma_discovery -- sma/discovery.sh@237 -- # jq -r '. | length'
00:19:08.157   19:17:38 sma.sma_discovery -- sma/discovery.sh@237 -- # [[ 0 -eq 0 ]]
00:19:08.157    19:17:38 sma.sma_discovery -- sma/discovery.sh@238 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:19:08.157    19:17:38 sma.sma_discovery -- sma/discovery.sh@238 -- # jq -r '.[].namespaces | length'
00:19:08.415   19:17:39 sma.sma_discovery -- sma/discovery.sh@238 -- # [[ 0 -eq 0 ]]
00:19:08.415    19:17:39 sma.sma_discovery -- sma/discovery.sh@241 -- # uuidgen
00:19:08.415   19:17:39 sma.sma_discovery -- sma/discovery.sh@241 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 e6e74cb4-aa09-4bf5-a174-2339439fd844 8009
00:19:08.415   19:17:39 sma.sma_discovery -- common/autotest_common.sh@652 -- # local es=0
00:19:08.415   19:17:39 sma.sma_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 e6e74cb4-aa09-4bf5-a174-2339439fd844 8009
00:19:08.415   19:17:39 sma.sma_discovery -- common/autotest_common.sh@640 -- # local arg=attach_volume
00:19:08.415   19:17:39 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:08.415    19:17:39 sma.sma_discovery -- common/autotest_common.sh@644 -- # type -t attach_volume
00:19:08.415   19:17:39 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:08.415   19:17:39 sma.sma_discovery -- common/autotest_common.sh@655 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 e6e74cb4-aa09-4bf5-a174-2339439fd844 8009
00:19:08.415   19:17:39 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:19:08.415   19:17:39 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:19:08.415   19:17:39 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:19:08.415    19:17:39 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume e6e74cb4-aa09-4bf5-a174-2339439fd844 8009
00:19:08.415    19:17:39 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=e6e74cb4-aa09-4bf5-a174-2339439fd844
00:19:08.415    19:17:39 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:19:08.415    19:17:39 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:19:08.415     19:17:39 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 e6e74cb4-aa09-4bf5-a174-2339439fd844
00:19:08.415     19:17:39 sma.sma_discovery -- sma/common.sh@20 -- # python
00:19:08.415     19:17:39 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8009
00:19:08.415     19:17:39 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8009')
00:19:08.415     19:17:39 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:19:08.415     19:17:39 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:19:08.415     19:17:39 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:19:08.415     19:17:39 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:19:08.415     19:17:39 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 1 ))
00:19:08.415     19:17:39 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:19:08.415     19:17:39 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:19:08.675  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:19:08.675  I0000 00:00:1733509059.444343  592040 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:19:08.675  I0000 00:00:1733509059.446187  592040 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:19:09.608  [2024-12-06 19:17:40.549976] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: e6e74cb4-aa09-4bf5-a174-2339439fd844
00:19:09.866  [2024-12-06 19:17:40.650223] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: e6e74cb4-aa09-4bf5-a174-2339439fd844
00:19:09.866  [2024-12-06 19:17:40.750467] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: e6e74cb4-aa09-4bf5-a174-2339439fd844
00:19:10.123  [2024-12-06 19:17:40.850713] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: e6e74cb4-aa09-4bf5-a174-2339439fd844
00:19:10.123  [2024-12-06 19:17:40.950960] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: e6e74cb4-aa09-4bf5-a174-2339439fd844
00:19:10.123  [2024-12-06 19:17:41.051207] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: e6e74cb4-aa09-4bf5-a174-2339439fd844
00:19:10.381  [2024-12-06 19:17:41.151455] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: e6e74cb4-aa09-4bf5-a174-2339439fd844
00:19:10.381  [2024-12-06 19:17:41.251702] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: e6e74cb4-aa09-4bf5-a174-2339439fd844
00:19:10.638  [2024-12-06 19:17:41.351948] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: e6e74cb4-aa09-4bf5-a174-2339439fd844
00:19:10.638  [2024-12-06 19:17:41.452195] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: e6e74cb4-aa09-4bf5-a174-2339439fd844
00:19:10.638  [2024-12-06 19:17:41.552450] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: e6e74cb4-aa09-4bf5-a174-2339439fd844
00:19:10.638  [2024-12-06 19:17:41.552482] bdev.c:8801:_bdev_open_async: *ERROR*: Timed out while waiting for bdev 'e6e74cb4-aa09-4bf5-a174-2339439fd844' to appear
00:19:10.638  Traceback (most recent call last):
00:19:10.638    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:19:10.638      main(sys.argv[1:])
00:19:10.638    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:19:10.638      result = client.call(request['method'], request.get('params', {}))
00:19:10.638               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:19:10.638    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:19:10.638      response = func(request=json_format.ParseDict(params, input()))
00:19:10.638                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:19:10.638    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:19:10.638      return _end_unary_response_blocking(state, call, False, None)
00:19:10.638             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:19:10.638    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:19:10.638      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:19:10.638      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:19:10.638  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:19:10.638  	status = StatusCode.NOT_FOUND
00:19:10.639  	details = "Volume could not be found"
00:19:10.639  	debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B::1%5D:8080 {created_time:"2024-12-06T19:17:41.570019202+01:00", grpc_status:5, grpc_message:"Volume could not be found"}"
00:19:10.639  >
00:19:10.897   19:17:41 sma.sma_discovery -- common/autotest_common.sh@655 -- # es=1
00:19:10.897   19:17:41 sma.sma_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:19:10.897   19:17:41 sma.sma_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:19:10.897   19:17:41 sma.sma_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:19:10.897    19:17:41 sma.sma_discovery -- sma/discovery.sh@242 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:19:10.897    19:17:41 sma.sma_discovery -- sma/discovery.sh@242 -- # jq -r '. | length'
00:19:11.155   19:17:41 sma.sma_discovery -- sma/discovery.sh@242 -- # [[ 0 -eq 0 ]]
00:19:11.155    19:17:41 sma.sma_discovery -- sma/discovery.sh@243 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:19:11.155    19:17:41 sma.sma_discovery -- sma/discovery.sh@243 -- # jq -r '.[].namespaces | length'
00:19:11.413   19:17:42 sma.sma_discovery -- sma/discovery.sh@243 -- # [[ 0 -eq 0 ]]
00:19:11.413   19:17:42 sma.sma_discovery -- sma/discovery.sh@246 -- # volumes=($t1uuid $t2uuid)
00:19:11.413   19:17:42 sma.sma_discovery -- sma/discovery.sh@247 -- # for volume_id in "${volumes[@]}"
00:19:11.413   19:17:42 sma.sma_discovery -- sma/discovery.sh@248 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 2561c86f-ebe6-4293-b80b-086d4bebbc7e 8009 8010
00:19:11.413   19:17:42 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:19:11.413   19:17:42 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:19:11.413   19:17:42 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:19:11.413    19:17:42 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 2561c86f-ebe6-4293-b80b-086d4bebbc7e 8009 8010
00:19:11.413    19:17:42 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=2561c86f-ebe6-4293-b80b-086d4bebbc7e
00:19:11.413    19:17:42 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:19:11.413    19:17:42 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:19:11.413     19:17:42 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 2561c86f-ebe6-4293-b80b-086d4bebbc7e
00:19:11.413     19:17:42 sma.sma_discovery -- sma/common.sh@20 -- # python
00:19:11.413     19:17:42 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8009 8010
00:19:11.413     19:17:42 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8009' '8010')
00:19:11.413     19:17:42 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:19:11.413     19:17:42 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:19:11.413     19:17:42 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:19:11.413     19:17:42 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:19:11.413     19:17:42 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 ))
00:19:11.413     19:17:42 sma.sma_discovery -- sma/discovery.sh@44 -- # echo ,
00:19:11.413     19:17:42 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:19:11.413     19:17:42 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:19:11.413     19:17:42 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:19:11.413     19:17:42 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 ))
00:19:11.413     19:17:42 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:19:11.413     19:17:42 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:19:11.672  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:19:11.673  I0000 00:00:1733509062.447721  592349 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:19:11.673  I0000 00:00:1733509062.449544  592349 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:19:14.202  {}
00:19:14.202   19:17:44 sma.sma_discovery -- sma/discovery.sh@247 -- # for volume_id in "${volumes[@]}"
00:19:14.202   19:17:44 sma.sma_discovery -- sma/discovery.sh@248 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 9efb2999-4008-4dcf-92c9-906cc8a2a1ad 8009 8010
00:19:14.202   19:17:44 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:19:14.202   19:17:44 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:19:14.202   19:17:44 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:19:14.202    19:17:44 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 9efb2999-4008-4dcf-92c9-906cc8a2a1ad 8009 8010
00:19:14.202    19:17:44 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=9efb2999-4008-4dcf-92c9-906cc8a2a1ad
00:19:14.202    19:17:44 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:19:14.202    19:17:44 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:19:14.202     19:17:44 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 9efb2999-4008-4dcf-92c9-906cc8a2a1ad
00:19:14.202     19:17:44 sma.sma_discovery -- sma/common.sh@20 -- # python
00:19:14.202     19:17:44 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8009 8010
00:19:14.202     19:17:44 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8009' '8010')
00:19:14.202     19:17:44 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:19:14.202     19:17:44 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:19:14.202     19:17:44 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:19:14.202     19:17:44 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:19:14.202     19:17:44 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 ))
00:19:14.202     19:17:44 sma.sma_discovery -- sma/discovery.sh@44 -- # echo ,
00:19:14.202     19:17:44 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:19:14.202     19:17:44 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:19:14.202     19:17:44 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:19:14.202     19:17:44 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 ))
00:19:14.202     19:17:44 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:19:14.202     19:17:44 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:19:14.202  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:19:14.202  I0000 00:00:1733509065.074628  592741 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:19:14.202  I0000 00:00:1733509065.076807  592741 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:19:14.202  {}
00:19:14.202    19:17:45 sma.sma_discovery -- sma/discovery.sh@251 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:19:14.202    19:17:45 sma.sma_discovery -- sma/discovery.sh@251 -- # jq -r '. | length'
00:19:14.767   19:17:45 sma.sma_discovery -- sma/discovery.sh@251 -- # [[ 2 -eq 2 ]]
00:19:14.767   19:17:45 sma.sma_discovery -- sma/discovery.sh@252 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:19:14.767   19:17:45 sma.sma_discovery -- sma/discovery.sh@252 -- # jq -r '.[].trid.trsvcid'
00:19:14.767   19:17:45 sma.sma_discovery -- sma/discovery.sh@252 -- # grep 8009
00:19:14.767  8009
00:19:14.767   19:17:45 sma.sma_discovery -- sma/discovery.sh@253 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:19:14.767   19:17:45 sma.sma_discovery -- sma/discovery.sh@253 -- # jq -r '.[].trid.trsvcid'
00:19:14.767   19:17:45 sma.sma_discovery -- sma/discovery.sh@253 -- # grep 8010
00:19:15.332  8010
00:19:15.332   19:17:45 sma.sma_discovery -- sma/discovery.sh@254 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:19:15.332   19:17:45 sma.sma_discovery -- sma/discovery.sh@254 -- # jq -r '.[].namespaces[].uuid'
00:19:15.332   19:17:45 sma.sma_discovery -- sma/discovery.sh@254 -- # grep 2561c86f-ebe6-4293-b80b-086d4bebbc7e
00:19:15.332  2561c86f-ebe6-4293-b80b-086d4bebbc7e
00:19:15.332   19:17:46 sma.sma_discovery -- sma/discovery.sh@255 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:19:15.332   19:17:46 sma.sma_discovery -- sma/discovery.sh@255 -- # jq -r '.[].namespaces[].uuid'
00:19:15.332   19:17:46 sma.sma_discovery -- sma/discovery.sh@255 -- # grep 9efb2999-4008-4dcf-92c9-906cc8a2a1ad
00:19:15.590  9efb2999-4008-4dcf-92c9-906cc8a2a1ad
00:19:15.590   19:17:46 sma.sma_discovery -- sma/discovery.sh@258 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 2561c86f-ebe6-4293-b80b-086d4bebbc7e
00:19:15.590   19:17:46 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:19:15.590    19:17:46 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 2561c86f-ebe6-4293-b80b-086d4bebbc7e
00:19:15.590    19:17:46 sma.sma_discovery -- sma/common.sh@20 -- # python
00:19:16.105  {}
00:19:16.105    19:17:46 sma.sma_discovery -- sma/discovery.sh@260 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:19:16.105    19:17:46 sma.sma_discovery -- sma/discovery.sh@260 -- # jq -r '. | length'
00:19:16.364   19:17:47 sma.sma_discovery -- sma/discovery.sh@260 -- # [[ 2 -eq 2 ]]
00:19:16.364   19:17:47 sma.sma_discovery -- sma/discovery.sh@261 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:19:16.364   19:17:47 sma.sma_discovery -- sma/discovery.sh@261 -- # jq -r '.[].trid.trsvcid'
00:19:16.364   19:17:47 sma.sma_discovery -- sma/discovery.sh@261 -- # grep 8009
00:19:16.621  8009
00:19:16.621   19:17:47 sma.sma_discovery -- sma/discovery.sh@262 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:19:16.621   19:17:47 sma.sma_discovery -- sma/discovery.sh@262 -- # jq -r '.[].trid.trsvcid'
00:19:16.621   19:17:47 sma.sma_discovery -- sma/discovery.sh@262 -- # grep 8010
00:19:16.878  8010
00:19:16.878   19:17:47 sma.sma_discovery -- sma/discovery.sh@265 -- # NOT delete_device nvmf-tcp:nqn.2016-06.io.spdk:local0
00:19:16.878   19:17:47 sma.sma_discovery -- common/autotest_common.sh@652 -- # local es=0
00:19:16.878   19:17:47 sma.sma_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg delete_device nvmf-tcp:nqn.2016-06.io.spdk:local0
00:19:16.878   19:17:47 sma.sma_discovery -- common/autotest_common.sh@640 -- # local arg=delete_device
00:19:16.878   19:17:47 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:16.878    19:17:47 sma.sma_discovery -- common/autotest_common.sh@644 -- # type -t delete_device
00:19:16.878   19:17:47 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:16.878   19:17:47 sma.sma_discovery -- common/autotest_common.sh@655 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:local0
00:19:16.878   19:17:47 sma.sma_discovery -- sma/discovery.sh@95 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:19:17.137  Traceback (most recent call last):
00:19:17.137    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:19:17.137      main(sys.argv[1:])
00:19:17.137    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:19:17.137      result = client.call(request['method'], request.get('params', {}))
00:19:17.137               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:19:17.137    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:19:17.137      response = func(request=json_format.ParseDict(params, input()))
00:19:17.137                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:19:17.137    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:19:17.137      return _end_unary_response_blocking(state, call, False, None)
00:19:17.137             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:19:17.137    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:19:17.137      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:19:17.137      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:19:17.137  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:19:17.137  	status = StatusCode.FAILED_PRECONDITION
00:19:17.137  	details = "Device has attached volumes"
00:19:17.137  	debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B::1%5D:8080 {created_time:"2024-12-06T19:17:47.885638443+01:00", grpc_status:9, grpc_message:"Device has attached volumes"}"
00:19:17.137  >
00:19:17.137   19:17:47 sma.sma_discovery -- common/autotest_common.sh@655 -- # es=1
00:19:17.137   19:17:47 sma.sma_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:19:17.137   19:17:47 sma.sma_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:19:17.137   19:17:47 sma.sma_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:19:17.137    19:17:47 sma.sma_discovery -- sma/discovery.sh@267 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:19:17.137    19:17:47 sma.sma_discovery -- sma/discovery.sh@267 -- # jq -r '. | length'
00:19:17.394   19:17:48 sma.sma_discovery -- sma/discovery.sh@267 -- # [[ 2 -eq 2 ]]
00:19:17.394   19:17:48 sma.sma_discovery -- sma/discovery.sh@268 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:19:17.394   19:17:48 sma.sma_discovery -- sma/discovery.sh@268 -- # jq -r '.[].trid.trsvcid'
00:19:17.394   19:17:48 sma.sma_discovery -- sma/discovery.sh@268 -- # grep 8009
00:19:17.652  8009
00:19:17.652   19:17:48 sma.sma_discovery -- sma/discovery.sh@269 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:19:17.652   19:17:48 sma.sma_discovery -- sma/discovery.sh@269 -- # jq -r '.[].trid.trsvcid'
00:19:17.652   19:17:48 sma.sma_discovery -- sma/discovery.sh@269 -- # grep 8010
00:19:17.910  8010
00:19:17.910   19:17:48 sma.sma_discovery -- sma/discovery.sh@272 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 9efb2999-4008-4dcf-92c9-906cc8a2a1ad
00:19:17.910   19:17:48 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:19:17.910    19:17:48 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 9efb2999-4008-4dcf-92c9-906cc8a2a1ad
00:19:17.910    19:17:48 sma.sma_discovery -- sma/common.sh@20 -- # python
00:19:18.168  {}
00:19:18.168   19:17:49 sma.sma_discovery -- sma/discovery.sh@273 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:local0
00:19:18.168   19:17:49 sma.sma_discovery -- sma/discovery.sh@95 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:19:18.425  {}
00:19:18.426    19:17:49 sma.sma_discovery -- sma/discovery.sh@275 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:19:18.426    19:17:49 sma.sma_discovery -- sma/discovery.sh@275 -- # jq -r '. | length'
00:19:18.683   19:17:49 sma.sma_discovery -- sma/discovery.sh@275 -- # [[ 0 -eq 0 ]]
00:19:18.683   19:17:49 sma.sma_discovery -- sma/discovery.sh@276 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:19:18.683   19:17:49 sma.sma_discovery -- common/autotest_common.sh@652 -- # local es=0
00:19:18.683   19:17:49 sma.sma_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:19:18.683   19:17:49 sma.sma_discovery -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:19:18.942   19:17:49 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:18.942    19:17:49 sma.sma_discovery -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:19:18.942   19:17:49 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:18.942    19:17:49 sma.sma_discovery -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:19:18.942   19:17:49 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:18.942   19:17:49 sma.sma_discovery -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:19:18.942   19:17:49 sma.sma_discovery -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py ]]
00:19:18.942   19:17:49 sma.sma_discovery -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:19:18.942  [2024-12-06 19:17:49.881898] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:local0' does not exist
00:19:18.942  request:
00:19:18.942  {
00:19:18.942    "nqn": "nqn.2016-06.io.spdk:local0",
00:19:18.942    "method": "nvmf_get_subsystems",
00:19:18.942    "req_id": 1
00:19:18.942  }
00:19:18.942  Got JSON-RPC error response
00:19:18.942  response:
00:19:18.942  {
00:19:18.942    "code": -19,
00:19:18.942    "message": "No such device"
00:19:18.942  }
00:19:19.200   19:17:49 sma.sma_discovery -- common/autotest_common.sh@655 -- # es=1
00:19:19.200   19:17:49 sma.sma_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:19:19.200   19:17:49 sma.sma_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:19:19.200   19:17:49 sma.sma_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:19:19.200    19:17:49 sma.sma_discovery -- sma/discovery.sh@279 -- # create_device nqn.2016-06.io.spdk:local0 2561c86f-ebe6-4293-b80b-086d4bebbc7e 8009
00:19:19.200    19:17:49 sma.sma_discovery -- sma/discovery.sh@69 -- # local nqn=nqn.2016-06.io.spdk:local0
00:19:19.200    19:17:49 sma.sma_discovery -- sma/discovery.sh@279 -- # jq -r .handle
00:19:19.200    19:17:49 sma.sma_discovery -- sma/discovery.sh@70 -- # local volume_id=2561c86f-ebe6-4293-b80b-086d4bebbc7e
00:19:19.200    19:17:49 sma.sma_discovery -- sma/discovery.sh@71 -- # local volume=
00:19:19.200    19:17:49 sma.sma_discovery -- sma/discovery.sh@73 -- # shift
00:19:19.200    19:17:49 sma.sma_discovery -- sma/discovery.sh@74 -- # [[ -n 2561c86f-ebe6-4293-b80b-086d4bebbc7e ]]
00:19:19.200     19:17:49 sma.sma_discovery -- sma/discovery.sh@75 -- # format_volume 2561c86f-ebe6-4293-b80b-086d4bebbc7e 8009
00:19:19.200     19:17:49 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=2561c86f-ebe6-4293-b80b-086d4bebbc7e
00:19:19.200     19:17:49 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:19:19.200     19:17:49 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:19:19.200      19:17:49 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 2561c86f-ebe6-4293-b80b-086d4bebbc7e
00:19:19.200      19:17:49 sma.sma_discovery -- sma/common.sh@20 -- # python
00:19:19.200      19:17:49 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8009
00:19:19.200      19:17:49 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8009')
00:19:19.200      19:17:49 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:19:19.200      19:17:49 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:19:19.200      19:17:49 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:19:19.200      19:17:49 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:19:19.200      19:17:49 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 1 ))
00:19:19.200      19:17:49 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:19:19.200      19:17:49 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:19:19.200    19:17:49 sma.sma_discovery -- sma/discovery.sh@75 -- # volume='"volume": {
00:19:19.200  "volume_id": "JWHIb+vmQpO4CwhtS+u8fg==",
00:19:19.200  "nvmf": {
00:19:19.200  "hostnqn": "nqn.2016-06.io.spdk:host0",
00:19:19.200  "discovery": {
00:19:19.200  "discovery_endpoints": [
00:19:19.200  {
00:19:19.200  "trtype": "tcp",
00:19:19.200  "traddr": "127.0.0.1",
00:19:19.200  "trsvcid": "8009"
00:19:19.200  }
00:19:19.200  ]
00:19:19.200  }
00:19:19.200  }
00:19:19.200  },'
00:19:19.200    19:17:49 sma.sma_discovery -- sma/discovery.sh@78 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:19:20.393  [2024-12-06 19:17:51.313547] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 ***
00:19:20.651   19:17:51 sma.sma_discovery -- sma/discovery.sh@279 -- # device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:19:20.651    19:17:51 sma.sma_discovery -- sma/discovery.sh@282 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:19:20.651    19:17:51 sma.sma_discovery -- sma/discovery.sh@282 -- # jq -r '. | length'
00:19:20.910   19:17:51 sma.sma_discovery -- sma/discovery.sh@282 -- # [[ 1 -eq 1 ]]
00:19:20.910   19:17:51 sma.sma_discovery -- sma/discovery.sh@283 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:19:20.910   19:17:51 sma.sma_discovery -- sma/discovery.sh@283 -- # jq -r '.[].trid.trsvcid'
00:19:20.910   19:17:51 sma.sma_discovery -- sma/discovery.sh@283 -- # grep 8009
00:19:21.168  8009
00:19:21.168    19:17:51 sma.sma_discovery -- sma/discovery.sh@284 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:19:21.168    19:17:51 sma.sma_discovery -- sma/discovery.sh@284 -- # jq -r '.[].namespaces | length'
00:19:21.427   19:17:52 sma.sma_discovery -- sma/discovery.sh@284 -- # [[ 1 -eq 1 ]]
00:19:21.427    19:17:52 sma.sma_discovery -- sma/discovery.sh@285 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:19:21.427    19:17:52 sma.sma_discovery -- sma/discovery.sh@285 -- # jq -r '.[].namespaces[0].uuid'
00:19:21.686   19:17:52 sma.sma_discovery -- sma/discovery.sh@285 -- # [[ 2561c86f-ebe6-4293-b80b-086d4bebbc7e == \2\5\6\1\c\8\6\f\-\e\b\e\6\-\4\2\9\3\-\b\8\0\b\-\0\8\6\d\4\b\e\b\b\c\7\e ]]
00:19:21.686   19:17:52 sma.sma_discovery -- sma/discovery.sh@288 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 2561c86f-ebe6-4293-b80b-086d4bebbc7e
00:19:21.686   19:17:52 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:19:21.686    19:17:52 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 2561c86f-ebe6-4293-b80b-086d4bebbc7e
00:19:21.686    19:17:52 sma.sma_discovery -- sma/common.sh@20 -- # python
00:19:21.945  {}
00:19:21.945    19:17:52 sma.sma_discovery -- sma/discovery.sh@290 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:19:21.945    19:17:52 sma.sma_discovery -- sma/discovery.sh@290 -- # jq -r '. | length'
00:19:22.203   19:17:53 sma.sma_discovery -- sma/discovery.sh@290 -- # [[ 0 -eq 0 ]]
00:19:22.203    19:17:53 sma.sma_discovery -- sma/discovery.sh@291 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:19:22.203    19:17:53 sma.sma_discovery -- sma/discovery.sh@291 -- # jq -r '.[].namespaces | length'
00:19:22.461   19:17:53 sma.sma_discovery -- sma/discovery.sh@291 -- # [[ 0 -eq 0 ]]
00:19:22.461   19:17:53 sma.sma_discovery -- sma/discovery.sh@294 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 9efb2999-4008-4dcf-92c9-906cc8a2a1ad 8010 8011
00:19:22.461   19:17:53 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:19:22.461   19:17:53 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:19:22.461   19:17:53 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:19:22.461    19:17:53 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 9efb2999-4008-4dcf-92c9-906cc8a2a1ad 8010 8011
00:19:22.461    19:17:53 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=9efb2999-4008-4dcf-92c9-906cc8a2a1ad
00:19:22.461    19:17:53 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:19:22.461    19:17:53 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:19:22.461     19:17:53 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 9efb2999-4008-4dcf-92c9-906cc8a2a1ad
00:19:22.461     19:17:53 sma.sma_discovery -- sma/common.sh@20 -- # python
00:19:22.461     19:17:53 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8010 8011
00:19:22.461     19:17:53 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8010' '8011')
00:19:22.461     19:17:53 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:19:22.461     19:17:53 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:19:22.461     19:17:53 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:19:22.461     19:17:53 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:19:22.461     19:17:53 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 ))
00:19:22.461     19:17:53 sma.sma_discovery -- sma/discovery.sh@44 -- # echo ,
00:19:22.461     19:17:53 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:19:22.461     19:17:53 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:19:22.461     19:17:53 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:19:22.461     19:17:53 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 ))
00:19:22.461     19:17:53 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:19:22.461     19:17:53 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:19:24.093  {}
00:19:24.094    19:17:54 sma.sma_discovery -- sma/discovery.sh@297 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:19:24.094    19:17:54 sma.sma_discovery -- sma/discovery.sh@297 -- # jq -r '. | length'
00:19:24.351   19:17:55 sma.sma_discovery -- sma/discovery.sh@297 -- # [[ 1 -eq 1 ]]
00:19:24.351    19:17:55 sma.sma_discovery -- sma/discovery.sh@298 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:19:24.351    19:17:55 sma.sma_discovery -- sma/discovery.sh@298 -- # jq -r '.[].namespaces | length'
00:19:24.607   19:17:55 sma.sma_discovery -- sma/discovery.sh@298 -- # [[ 1 -eq 1 ]]
00:19:24.607    19:17:55 sma.sma_discovery -- sma/discovery.sh@299 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:19:24.607    19:17:55 sma.sma_discovery -- sma/discovery.sh@299 -- # jq -r '.[].namespaces[0].uuid'
00:19:24.865   19:17:55 sma.sma_discovery -- sma/discovery.sh@299 -- # [[ 9efb2999-4008-4dcf-92c9-906cc8a2a1ad == \9\e\f\b\2\9\9\9\-\4\0\0\8\-\4\d\c\f\-\9\2\c\9\-\9\0\6\c\c\8\a\2\a\1\a\d ]]
00:19:24.865   19:17:55 sma.sma_discovery -- sma/discovery.sh@302 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 29d53783-24d5-4067-a97a-b7b6df495f81 8011
00:19:24.865   19:17:55 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:19:24.865   19:17:55 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:19:24.865   19:17:55 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:19:24.865    19:17:55 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 29d53783-24d5-4067-a97a-b7b6df495f81 8011
00:19:24.865    19:17:55 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=29d53783-24d5-4067-a97a-b7b6df495f81
00:19:24.865    19:17:55 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:19:24.865    19:17:55 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:19:24.865     19:17:55 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 29d53783-24d5-4067-a97a-b7b6df495f81
00:19:24.865     19:17:55 sma.sma_discovery -- sma/common.sh@20 -- # python
00:19:24.865     19:17:55 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8011
00:19:24.865     19:17:55 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8011')
00:19:24.865     19:17:55 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:19:24.865     19:17:55 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:19:24.865     19:17:55 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:19:24.865     19:17:55 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:19:24.865     19:17:55 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 1 ))
00:19:24.865     19:17:55 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:19:24.865     19:17:55 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:19:25.123  {}
00:19:25.123    19:17:55 sma.sma_discovery -- sma/discovery.sh@305 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:19:25.123    19:17:55 sma.sma_discovery -- sma/discovery.sh@305 -- # jq -r '. | length'
00:19:25.380   19:17:56 sma.sma_discovery -- sma/discovery.sh@305 -- # [[ 1 -eq 1 ]]
00:19:25.380    19:17:56 sma.sma_discovery -- sma/discovery.sh@306 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:19:25.380    19:17:56 sma.sma_discovery -- sma/discovery.sh@306 -- # jq -r '.[].namespaces | length'
00:19:25.637   19:17:56 sma.sma_discovery -- sma/discovery.sh@306 -- # [[ 2 -eq 2 ]]
00:19:25.637   19:17:56 sma.sma_discovery -- sma/discovery.sh@307 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:19:25.637   19:17:56 sma.sma_discovery -- sma/discovery.sh@307 -- # jq -r '.[].namespaces[].uuid'
00:19:25.637   19:17:56 sma.sma_discovery -- sma/discovery.sh@307 -- # grep 9efb2999-4008-4dcf-92c9-906cc8a2a1ad
00:19:25.895  9efb2999-4008-4dcf-92c9-906cc8a2a1ad
00:19:25.895   19:17:56 sma.sma_discovery -- sma/discovery.sh@308 -- # jq -r '.[].namespaces[].uuid'
00:19:25.895   19:17:56 sma.sma_discovery -- sma/discovery.sh@308 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:19:25.895   19:17:56 sma.sma_discovery -- sma/discovery.sh@308 -- # grep 29d53783-24d5-4067-a97a-b7b6df495f81
00:19:26.153  29d53783-24d5-4067-a97a-b7b6df495f81
00:19:26.153   19:17:57 sma.sma_discovery -- sma/discovery.sh@311 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 2561c86f-ebe6-4293-b80b-086d4bebbc7e
00:19:26.153   19:17:57 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:19:26.153    19:17:57 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 2561c86f-ebe6-4293-b80b-086d4bebbc7e
00:19:26.153    19:17:57 sma.sma_discovery -- sma/common.sh@20 -- # python
00:19:26.411  [2024-12-06 19:17:57.327746] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 2561c86f-ebe6-4293-b80b-086d4bebbc7e
00:19:26.411  {}
00:19:26.411   19:17:57 sma.sma_discovery -- sma/discovery.sh@312 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 9efb2999-4008-4dcf-92c9-906cc8a2a1ad
00:19:26.411   19:17:57 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:19:26.411    19:17:57 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 9efb2999-4008-4dcf-92c9-906cc8a2a1ad
00:19:26.411    19:17:57 sma.sma_discovery -- sma/common.sh@20 -- # python
00:19:26.977  {}
00:19:26.977   19:17:57 sma.sma_discovery -- sma/discovery.sh@313 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 29d53783-24d5-4067-a97a-b7b6df495f81
00:19:26.977   19:17:57 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:19:26.977    19:17:57 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 29d53783-24d5-4067-a97a-b7b6df495f81
00:19:26.977    19:17:57 sma.sma_discovery -- sma/common.sh@20 -- # python
00:19:27.235  {}
00:19:27.235   19:17:58 sma.sma_discovery -- sma/discovery.sh@314 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:local0
00:19:27.235   19:17:58 sma.sma_discovery -- sma/discovery.sh@95 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:19:27.493  {}
00:19:27.493    19:17:58 sma.sma_discovery -- sma/discovery.sh@315 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:19:27.493    19:17:58 sma.sma_discovery -- sma/discovery.sh@315 -- # jq -r '. | length'
00:19:27.751   19:17:58 sma.sma_discovery -- sma/discovery.sh@315 -- # [[ 0 -eq 0 ]]
00:19:27.751    19:17:58 sma.sma_discovery -- sma/discovery.sh@317 -- # create_device nqn.2016-06.io.spdk:local0
00:19:27.751    19:17:58 sma.sma_discovery -- sma/discovery.sh@317 -- # jq -r .handle
00:19:27.751    19:17:58 sma.sma_discovery -- sma/discovery.sh@69 -- # local nqn=nqn.2016-06.io.spdk:local0
00:19:27.751    19:17:58 sma.sma_discovery -- sma/discovery.sh@70 -- # local volume_id=
00:19:27.751    19:17:58 sma.sma_discovery -- sma/discovery.sh@71 -- # local volume=
00:19:27.751    19:17:58 sma.sma_discovery -- sma/discovery.sh@73 -- # shift
00:19:27.751    19:17:58 sma.sma_discovery -- sma/discovery.sh@74 -- # [[ -n '' ]]
00:19:27.751    19:17:58 sma.sma_discovery -- sma/discovery.sh@78 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:19:28.009  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:19:28.009  I0000 00:00:1733509078.805895  594787 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:19:28.009  I0000 00:00:1733509078.807664  594787 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:19:28.009  [2024-12-06 19:17:58.829245] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 ***
00:19:28.009   19:17:58 sma.sma_discovery -- sma/discovery.sh@317 -- # device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:19:28.009   19:17:58 sma.sma_discovery -- sma/discovery.sh@320 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:19:28.009    19:17:58 sma.sma_discovery -- sma/discovery.sh@320 -- # uuid2base64 2561c86f-ebe6-4293-b80b-086d4bebbc7e
00:19:28.009    19:17:58 sma.sma_discovery -- sma/common.sh@20 -- # python
00:19:28.267  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:19:28.267  I0000 00:00:1733509079.138389  594808 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:19:28.267  I0000 00:00:1733509079.140111  594808 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:19:29.642  {}
00:19:29.642    19:18:00 sma.sma_discovery -- sma/discovery.sh@345 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:19:29.642    19:18:00 sma.sma_discovery -- sma/discovery.sh@345 -- # jq -r '. | length'
00:19:29.642   19:18:00 sma.sma_discovery -- sma/discovery.sh@345 -- # [[ 1 -eq 1 ]]
00:19:29.642   19:18:00 sma.sma_discovery -- sma/discovery.sh@346 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:19:29.642   19:18:00 sma.sma_discovery -- sma/discovery.sh@346 -- # jq -r '.[].trid.trsvcid'
00:19:29.642   19:18:00 sma.sma_discovery -- sma/discovery.sh@346 -- # grep 8009
00:19:30.209  8009
00:19:30.209    19:18:00 sma.sma_discovery -- sma/discovery.sh@347 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:19:30.209    19:18:00 sma.sma_discovery -- sma/discovery.sh@347 -- # jq -r '.[].namespaces | length'
00:19:30.209   19:18:01 sma.sma_discovery -- sma/discovery.sh@347 -- # [[ 1 -eq 1 ]]
00:19:30.209    19:18:01 sma.sma_discovery -- sma/discovery.sh@348 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:19:30.209    19:18:01 sma.sma_discovery -- sma/discovery.sh@348 -- # jq -r '.[].namespaces[0].uuid'
00:19:30.775   19:18:01 sma.sma_discovery -- sma/discovery.sh@348 -- # [[ 2561c86f-ebe6-4293-b80b-086d4bebbc7e == \2\5\6\1\c\8\6\f\-\e\b\e\6\-\4\2\9\3\-\b\8\0\b\-\0\8\6\d\4\b\e\b\b\c\7\e ]]
00:19:30.775   19:18:01 sma.sma_discovery -- sma/discovery.sh@351 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:19:30.775    19:18:01 sma.sma_discovery -- sma/discovery.sh@351 -- # uuid2base64 9efb2999-4008-4dcf-92c9-906cc8a2a1ad
00:19:30.775    19:18:01 sma.sma_discovery -- sma/common.sh@20 -- # python
00:19:30.775   19:18:01 sma.sma_discovery -- common/autotest_common.sh@652 -- # local es=0
00:19:30.775   19:18:01 sma.sma_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:19:30.775   19:18:01 sma.sma_discovery -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:19:30.775   19:18:01 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:30.775    19:18:01 sma.sma_discovery -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:19:30.775   19:18:01 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:30.775    19:18:01 sma.sma_discovery -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:19:30.775   19:18:01 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:30.775   19:18:01 sma.sma_discovery -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:19:30.775   19:18:01 sma.sma_discovery -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:19:30.775   19:18:01 sma.sma_discovery -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:19:31.034  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:19:31.034  I0000 00:00:1733509081.724910  595313 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:19:31.034  I0000 00:00:1733509081.726708  595313 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:19:31.966  Traceback (most recent call last):
00:19:31.966    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:19:31.966      main(sys.argv[1:])
00:19:31.967    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:19:31.967      result = client.call(request['method'], request.get('params', {}))
00:19:31.967               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:19:31.967    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:19:31.967      response = func(request=json_format.ParseDict(params, input()))
00:19:31.967                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:19:31.967    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:19:31.967      return _end_unary_response_blocking(state, call, False, None)
00:19:31.967             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:19:31.967    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:19:31.967      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:19:31.967      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:19:31.967  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:19:31.967  	status = StatusCode.INVALID_ARGUMENT
00:19:31.967  	details = "Unexpected subsystem NQN"
00:19:31.967  	debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B::1%5D:8080 {created_time:"2024-12-06T19:18:02.857418547+01:00", grpc_status:3, grpc_message:"Unexpected subsystem NQN"}"
00:19:31.967  >
00:19:31.967   19:18:02 sma.sma_discovery -- common/autotest_common.sh@655 -- # es=1
00:19:31.967   19:18:02 sma.sma_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:19:31.967   19:18:02 sma.sma_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:19:31.967   19:18:02 sma.sma_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:19:31.967    19:18:02 sma.sma_discovery -- sma/discovery.sh@377 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:19:31.967    19:18:02 sma.sma_discovery -- sma/discovery.sh@377 -- # jq -r '. | length'
00:19:32.533   19:18:03 sma.sma_discovery -- sma/discovery.sh@377 -- # [[ 1 -eq 1 ]]
00:19:32.533   19:18:03 sma.sma_discovery -- sma/discovery.sh@378 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:19:32.533   19:18:03 sma.sma_discovery -- sma/discovery.sh@378 -- # jq -r '.[].trid.trsvcid'
00:19:32.533   19:18:03 sma.sma_discovery -- sma/discovery.sh@378 -- # grep 8009
00:19:32.533  8009
00:19:32.533    19:18:03 sma.sma_discovery -- sma/discovery.sh@379 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:19:32.533    19:18:03 sma.sma_discovery -- sma/discovery.sh@379 -- # jq -r '.[].namespaces | length'
00:19:32.789   19:18:03 sma.sma_discovery -- sma/discovery.sh@379 -- # [[ 1 -eq 1 ]]
00:19:32.789    19:18:03 sma.sma_discovery -- sma/discovery.sh@380 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:19:33.045    19:18:03 sma.sma_discovery -- sma/discovery.sh@380 -- # jq -r '.[].namespaces[0].uuid'
00:19:33.302   19:18:04 sma.sma_discovery -- sma/discovery.sh@380 -- # [[ 2561c86f-ebe6-4293-b80b-086d4bebbc7e == \2\5\6\1\c\8\6\f\-\e\b\e\6\-\4\2\9\3\-\b\8\0\b\-\0\8\6\d\4\b\e\b\b\c\7\e ]]
00:19:33.302   19:18:04 sma.sma_discovery -- sma/discovery.sh@383 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:19:33.302    19:18:04 sma.sma_discovery -- sma/discovery.sh@383 -- # uuid2base64 9efb2999-4008-4dcf-92c9-906cc8a2a1ad
00:19:33.302    19:18:04 sma.sma_discovery -- sma/common.sh@20 -- # python
00:19:33.302   19:18:04 sma.sma_discovery -- common/autotest_common.sh@652 -- # local es=0
00:19:33.302   19:18:04 sma.sma_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:19:33.302   19:18:04 sma.sma_discovery -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:19:33.302   19:18:04 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:33.302    19:18:04 sma.sma_discovery -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:19:33.302   19:18:04 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:33.302    19:18:04 sma.sma_discovery -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:19:33.302   19:18:04 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:33.302   19:18:04 sma.sma_discovery -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:19:33.302   19:18:04 sma.sma_discovery -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:19:33.302   19:18:04 sma.sma_discovery -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:19:33.559  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:19:33.559  I0000 00:00:1733509084.360845  595664 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:19:33.559  I0000 00:00:1733509084.362725  595664 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:19:38.990  [2024-12-06 19:18:09.390872] bdev_nvme.c:7603:discovery_poller: *ERROR*: Discovery[127.0.0.1:8010] timed out while attaching NVM ctrlrs
00:19:38.990  Traceback (most recent call last):
00:19:38.990    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:19:38.990      main(sys.argv[1:])
00:19:38.990    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:19:38.990      result = client.call(request['method'], request.get('params', {}))
00:19:38.990               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:19:38.990    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:19:38.990      response = func(request=json_format.ParseDict(params, input()))
00:19:38.990                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:19:38.990    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:19:38.990      return _end_unary_response_blocking(state, call, False, None)
00:19:38.990             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:19:38.990    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:19:38.990      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:19:38.990      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:19:38.990  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:19:38.990  	status = StatusCode.INTERNAL
00:19:38.990  	details = "Failed to start discovery"
00:19:38.990  	debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B::1%5D:8080 {grpc_message:"Failed to start discovery", grpc_status:13, created_time:"2024-12-06T19:18:09.393201661+01:00"}"
00:19:38.990  >
00:19:38.990   19:18:09 sma.sma_discovery -- common/autotest_common.sh@655 -- # es=1
00:19:38.990   19:18:09 sma.sma_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:19:38.990   19:18:09 sma.sma_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:19:38.990   19:18:09 sma.sma_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:19:38.990    19:18:09 sma.sma_discovery -- sma/discovery.sh@408 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:19:38.990    19:18:09 sma.sma_discovery -- sma/discovery.sh@408 -- # jq -r '. | length'
00:19:38.990   19:18:09 sma.sma_discovery -- sma/discovery.sh@408 -- # [[ 1 -eq 1 ]]
00:19:38.990   19:18:09 sma.sma_discovery -- sma/discovery.sh@409 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:19:38.990   19:18:09 sma.sma_discovery -- sma/discovery.sh@409 -- # jq -r '.[].trid.trsvcid'
00:19:38.990   19:18:09 sma.sma_discovery -- sma/discovery.sh@409 -- # grep 8009
00:19:39.249  8009
00:19:39.249    19:18:09 sma.sma_discovery -- sma/discovery.sh@410 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:19:39.249    19:18:09 sma.sma_discovery -- sma/discovery.sh@410 -- # jq -r '.[].namespaces | length'
00:19:39.506   19:18:10 sma.sma_discovery -- sma/discovery.sh@410 -- # [[ 1 -eq 1 ]]
00:19:39.506    19:18:10 sma.sma_discovery -- sma/discovery.sh@411 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:19:39.506    19:18:10 sma.sma_discovery -- sma/discovery.sh@411 -- # jq -r '.[].namespaces[0].uuid'
00:19:39.764   19:18:10 sma.sma_discovery -- sma/discovery.sh@411 -- # [[ 2561c86f-ebe6-4293-b80b-086d4bebbc7e == \2\5\6\1\c\8\6\f\-\e\b\e\6\-\4\2\9\3\-\b\8\0\b\-\0\8\6\d\4\b\e\b\b\c\7\e ]]
00:19:39.764    19:18:10 sma.sma_discovery -- sma/discovery.sh@414 -- # uuidgen
00:19:39.764   19:18:10 sma.sma_discovery -- sma/discovery.sh@414 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 f3a6b1c9-0159-4e19-ba55-a6a6a2809013 8008
00:19:39.764   19:18:10 sma.sma_discovery -- common/autotest_common.sh@652 -- # local es=0
00:19:39.764   19:18:10 sma.sma_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 f3a6b1c9-0159-4e19-ba55-a6a6a2809013 8008
00:19:39.764   19:18:10 sma.sma_discovery -- common/autotest_common.sh@640 -- # local arg=attach_volume
00:19:39.764   19:18:10 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:39.764    19:18:10 sma.sma_discovery -- common/autotest_common.sh@644 -- # type -t attach_volume
00:19:39.764   19:18:10 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:39.764   19:18:10 sma.sma_discovery -- common/autotest_common.sh@655 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 f3a6b1c9-0159-4e19-ba55-a6a6a2809013 8008
00:19:39.764   19:18:10 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:19:39.764   19:18:10 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:19:39.764   19:18:10 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:19:39.764    19:18:10 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume f3a6b1c9-0159-4e19-ba55-a6a6a2809013 8008
00:19:39.764    19:18:10 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=f3a6b1c9-0159-4e19-ba55-a6a6a2809013
00:19:39.764    19:18:10 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:19:39.764    19:18:10 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:19:39.764     19:18:10 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 f3a6b1c9-0159-4e19-ba55-a6a6a2809013
00:19:39.764     19:18:10 sma.sma_discovery -- sma/common.sh@20 -- # python
00:19:39.764     19:18:10 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8008
00:19:39.764     19:18:10 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8008')
00:19:39.764     19:18:10 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:19:39.764     19:18:10 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:19:39.764     19:18:10 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:19:39.764     19:18:10 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:19:39.764     19:18:10 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 1 ))
00:19:39.764     19:18:10 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:19:39.764     19:18:10 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:19:40.021  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:19:40.021  I0000 00:00:1733509090.846897  597012 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:19:40.021  I0000 00:00:1733509090.848853  597012 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:19:40.953  [2024-12-06 19:18:11.864491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:40.953  [2024-12-06 19:18:11.864577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f9a80 with addr=127.0.0.1, port=8008
00:19:40.953  [2024-12-06 19:18:11.864647] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:19:40.953  [2024-12-06 19:18:11.864670] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:19:40.953  [2024-12-06 19:18:11.864689] bdev_nvme.c:7578:discovery_poller: *ERROR*: Discovery[127.0.0.1:8008] could not start discovery connect
00:19:42.327  [2024-12-06 19:18:12.866998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:42.327  [2024-12-06 19:18:12.867086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f9d00 with addr=127.0.0.1, port=8008
00:19:42.327  [2024-12-06 19:18:12.867181] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:19:42.327  [2024-12-06 19:18:12.867217] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:19:42.327  [2024-12-06 19:18:12.867236] bdev_nvme.c:7578:discovery_poller: *ERROR*: Discovery[127.0.0.1:8008] could not start discovery connect
00:19:43.261  [2024-12-06 19:18:13.869314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:43.261  [2024-12-06 19:18:13.869375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f9f80 with addr=127.0.0.1, port=8008
00:19:43.261  [2024-12-06 19:18:13.869458] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:19:43.261  [2024-12-06 19:18:13.869479] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:19:43.261  [2024-12-06 19:18:13.869496] bdev_nvme.c:7578:discovery_poller: *ERROR*: Discovery[127.0.0.1:8008] could not start discovery connect
00:19:44.195  [2024-12-06 19:18:14.871765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:44.195  [2024-12-06 19:18:14.871824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001fa200 with addr=127.0.0.1, port=8008
00:19:44.195  [2024-12-06 19:18:14.871884] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:19:44.195  [2024-12-06 19:18:14.871903] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:19:44.195  [2024-12-06 19:18:14.871931] bdev_nvme.c:7578:discovery_poller: *ERROR*: Discovery[127.0.0.1:8008] could not start discovery connect
00:19:45.128  [2024-12-06 19:18:15.874067] bdev_nvme.c:7553:discovery_poller: *ERROR*: Discovery[127.0.0.1:8008] timed out while attaching discovery ctrlr
00:19:45.128  Traceback (most recent call last):
00:19:45.128    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:19:45.128      main(sys.argv[1:])
00:19:45.128    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:19:45.128      result = client.call(request['method'], request.get('params', {}))
00:19:45.128               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:19:45.128    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:19:45.128      response = func(request=json_format.ParseDict(params, input()))
00:19:45.128                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:19:45.128    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:19:45.128      return _end_unary_response_blocking(state, call, False, None)
00:19:45.128             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:19:45.128    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:19:45.128      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:19:45.128      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:19:45.128  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:19:45.128  	status = StatusCode.INTERNAL
00:19:45.128  	details = "Failed to start discovery"
00:19:45.128  	debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B::1%5D:8080 {grpc_message:"Failed to start discovery", grpc_status:13, created_time:"2024-12-06T19:18:15.876249593+01:00"}"
00:19:45.128  >
00:19:45.128   19:18:15 sma.sma_discovery -- common/autotest_common.sh@655 -- # es=1
00:19:45.128   19:18:15 sma.sma_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:19:45.128   19:18:15 sma.sma_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:19:45.128   19:18:15 sma.sma_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:19:45.128    19:18:15 sma.sma_discovery -- sma/discovery.sh@415 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:19:45.128    19:18:15 sma.sma_discovery -- sma/discovery.sh@415 -- # jq -r '. | length'
00:19:45.386   19:18:16 sma.sma_discovery -- sma/discovery.sh@415 -- # [[ 1 -eq 1 ]]
00:19:45.386   19:18:16 sma.sma_discovery -- sma/discovery.sh@416 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:19:45.386   19:18:16 sma.sma_discovery -- sma/discovery.sh@416 -- # jq -r '.[].trid.trsvcid'
00:19:45.386   19:18:16 sma.sma_discovery -- sma/discovery.sh@416 -- # grep 8009
00:19:45.643  8009
00:19:45.643   19:18:16 sma.sma_discovery -- sma/discovery.sh@420 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock1 nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:node1 1
00:19:45.902   19:18:16 sma.sma_discovery -- sma/discovery.sh@422 -- # sleep 2
00:19:46.159  WARNING:spdk.sma.volume.volume:Found disconnected volume: 2561c86f-ebe6-4293-b80b-086d4bebbc7e
00:19:48.058    19:18:18 sma.sma_discovery -- sma/discovery.sh@423 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:19:48.058    19:18:18 sma.sma_discovery -- sma/discovery.sh@423 -- # jq -r '. | length'
00:19:48.315   19:18:19 sma.sma_discovery -- sma/discovery.sh@423 -- # [[ 0 -eq 0 ]]
00:19:48.315   19:18:19 sma.sma_discovery -- sma/discovery.sh@424 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock1 nvmf_subsystem_add_ns nqn.2016-06.io.spdk:node1 2561c86f-ebe6-4293-b80b-086d4bebbc7e
00:19:48.573   19:18:19 sma.sma_discovery -- sma/discovery.sh@428 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 9efb2999-4008-4dcf-92c9-906cc8a2a1ad 8010
00:19:48.573   19:18:19 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:19:48.573   19:18:19 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:19:48.574   19:18:19 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:19:48.574    19:18:19 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 9efb2999-4008-4dcf-92c9-906cc8a2a1ad 8010
00:19:48.574    19:18:19 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=9efb2999-4008-4dcf-92c9-906cc8a2a1ad
00:19:48.574    19:18:19 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:19:48.574    19:18:19 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:19:48.574     19:18:19 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 9efb2999-4008-4dcf-92c9-906cc8a2a1ad
00:19:48.574     19:18:19 sma.sma_discovery -- sma/common.sh@20 -- # python
00:19:48.574     19:18:19 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8010
00:19:48.574     19:18:19 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8010')
00:19:48.574     19:18:19 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:19:48.574     19:18:19 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:19:48.574     19:18:19 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:19:48.574     19:18:19 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:19:48.574     19:18:19 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 1 ))
00:19:48.574     19:18:19 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:19:48.574     19:18:19 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:19:48.831  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:19:48.832  I0000 00:00:1733509099.614690  598006 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:19:48.832  I0000 00:00:1733509099.616390  598006 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:19:50.203  {}
00:19:50.203   19:18:20 sma.sma_discovery -- sma/discovery.sh@429 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 29d53783-24d5-4067-a97a-b7b6df495f81 8010
00:19:50.203   19:18:20 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:19:50.203   19:18:20 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:19:50.203   19:18:20 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:19:50.203    19:18:20 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 29d53783-24d5-4067-a97a-b7b6df495f81 8010
00:19:50.203    19:18:20 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=29d53783-24d5-4067-a97a-b7b6df495f81
00:19:50.203    19:18:20 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:19:50.203    19:18:20 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:19:50.203     19:18:20 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 29d53783-24d5-4067-a97a-b7b6df495f81
00:19:50.203     19:18:20 sma.sma_discovery -- sma/common.sh@20 -- # python
00:19:50.203     19:18:20 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8010
00:19:50.203     19:18:20 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8010')
00:19:50.203     19:18:20 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:19:50.203     19:18:20 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:19:50.203     19:18:20 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:19:50.203     19:18:20 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:19:50.203     19:18:20 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 1 ))
00:19:50.203     19:18:20 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:19:50.203     19:18:20 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:19:50.203  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:19:50.203  I0000 00:00:1733509101.119885  598284 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:19:50.203  I0000 00:00:1733509101.121552  598284 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:19:50.503  {}
00:19:50.503    19:18:21 sma.sma_discovery -- sma/discovery.sh@430 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:19:50.503    19:18:21 sma.sma_discovery -- sma/discovery.sh@430 -- # jq -r '.[].namespaces | length'
00:19:50.503   19:18:21 sma.sma_discovery -- sma/discovery.sh@430 -- # [[ 2 -eq 2 ]]
00:19:50.503    19:18:21 sma.sma_discovery -- sma/discovery.sh@431 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:19:50.503    19:18:21 sma.sma_discovery -- sma/discovery.sh@431 -- # jq -r '. | length'
00:19:51.067   19:18:21 sma.sma_discovery -- sma/discovery.sh@431 -- # [[ 1 -eq 1 ]]
00:19:51.067   19:18:21 sma.sma_discovery -- sma/discovery.sh@432 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock2 nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:node2 2
00:19:51.067   19:18:22 sma.sma_discovery -- sma/discovery.sh@434 -- # sleep 2
00:19:51.999  WARNING:spdk.sma.volume.volume:Found disconnected volume: 29d53783-24d5-4067-a97a-b7b6df495f81
00:19:53.373    19:18:24 sma.sma_discovery -- sma/discovery.sh@436 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:19:53.373    19:18:24 sma.sma_discovery -- sma/discovery.sh@436 -- # jq -r '.[].namespaces | length'
00:19:53.373   19:18:24 sma.sma_discovery -- sma/discovery.sh@436 -- # [[ 1 -eq 1 ]]
00:19:53.373    19:18:24 sma.sma_discovery -- sma/discovery.sh@437 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:19:53.373    19:18:24 sma.sma_discovery -- sma/discovery.sh@437 -- # jq -r '. | length'
00:19:53.631   19:18:24 sma.sma_discovery -- sma/discovery.sh@437 -- # [[ 1 -eq 1 ]]
00:19:53.631   19:18:24 sma.sma_discovery -- sma/discovery.sh@438 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock2 nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:node2 1
00:19:54.197   19:18:24 sma.sma_discovery -- sma/discovery.sh@440 -- # sleep 2
00:19:55.127  WARNING:spdk.sma.volume.volume:Found disconnected volume: 9efb2999-4008-4dcf-92c9-906cc8a2a1ad
00:19:56.061    19:18:26 sma.sma_discovery -- sma/discovery.sh@442 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:19:56.061    19:18:26 sma.sma_discovery -- sma/discovery.sh@442 -- # jq -r '.[].namespaces | length'
00:19:56.319   19:18:27 sma.sma_discovery -- sma/discovery.sh@442 -- # [[ 0 -eq 0 ]]
00:19:56.319    19:18:27 sma.sma_discovery -- sma/discovery.sh@443 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:19:56.319    19:18:27 sma.sma_discovery -- sma/discovery.sh@443 -- # jq -r '. | length'
00:19:56.577   19:18:27 sma.sma_discovery -- sma/discovery.sh@443 -- # [[ 0 -eq 0 ]]
00:19:56.578   19:18:27 sma.sma_discovery -- sma/discovery.sh@444 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock2 nvmf_subsystem_add_ns nqn.2016-06.io.spdk:node2 9efb2999-4008-4dcf-92c9-906cc8a2a1ad
00:19:56.835   19:18:27 sma.sma_discovery -- sma/discovery.sh@445 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock2 nvmf_subsystem_add_ns nqn.2016-06.io.spdk:node2 29d53783-24d5-4067-a97a-b7b6df495f81
00:19:57.093   19:18:27 sma.sma_discovery -- sma/discovery.sh@447 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:local0
00:19:57.093   19:18:27 sma.sma_discovery -- sma/discovery.sh@95 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:19:57.351  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:19:57.351  I0000 00:00:1733509108.232390  599158 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:19:57.351  I0000 00:00:1733509108.234192  599158 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:19:57.351  {}
00:19:57.351   19:18:28 sma.sma_discovery -- sma/discovery.sh@449 -- # cleanup
00:19:57.351   19:18:28 sma.sma_discovery -- sma/discovery.sh@27 -- # killprocess 590483
00:19:57.351   19:18:28 sma.sma_discovery -- common/autotest_common.sh@954 -- # '[' -z 590483 ']'
00:19:57.351   19:18:28 sma.sma_discovery -- common/autotest_common.sh@958 -- # kill -0 590483
00:19:57.351    19:18:28 sma.sma_discovery -- common/autotest_common.sh@959 -- # uname
00:19:57.351   19:18:28 sma.sma_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:57.351    19:18:28 sma.sma_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 590483
00:19:57.610   19:18:28 sma.sma_discovery -- common/autotest_common.sh@960 -- # process_name=python3
00:19:57.610   19:18:28 sma.sma_discovery -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:19:57.610   19:18:28 sma.sma_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 590483'
00:19:57.610  killing process with pid 590483
00:19:57.610   19:18:28 sma.sma_discovery -- common/autotest_common.sh@973 -- # kill 590483
00:19:57.610   19:18:28 sma.sma_discovery -- common/autotest_common.sh@978 -- # wait 590483
00:19:57.610   19:18:28 sma.sma_discovery -- sma/discovery.sh@28 -- # killprocess 590482
00:19:57.610   19:18:28 sma.sma_discovery -- common/autotest_common.sh@954 -- # '[' -z 590482 ']'
00:19:57.610   19:18:28 sma.sma_discovery -- common/autotest_common.sh@958 -- # kill -0 590482
00:19:57.610    19:18:28 sma.sma_discovery -- common/autotest_common.sh@959 -- # uname
00:19:57.610   19:18:28 sma.sma_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:57.610    19:18:28 sma.sma_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 590482
00:19:57.610   19:18:28 sma.sma_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:19:57.610   19:18:28 sma.sma_discovery -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:19:57.610   19:18:28 sma.sma_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 590482'
00:19:57.610  killing process with pid 590482
00:19:57.610   19:18:28 sma.sma_discovery -- common/autotest_common.sh@973 -- # kill 590482
00:19:57.610   19:18:28 sma.sma_discovery -- common/autotest_common.sh@978 -- # wait 590482
00:19:59.511   19:18:30 sma.sma_discovery -- sma/discovery.sh@29 -- # killprocess 590480
00:19:59.511   19:18:30 sma.sma_discovery -- common/autotest_common.sh@954 -- # '[' -z 590480 ']'
00:19:59.511   19:18:30 sma.sma_discovery -- common/autotest_common.sh@958 -- # kill -0 590480
00:19:59.511    19:18:30 sma.sma_discovery -- common/autotest_common.sh@959 -- # uname
00:19:59.511   19:18:30 sma.sma_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:59.511    19:18:30 sma.sma_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 590480
00:19:59.769   19:18:30 sma.sma_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:59.769   19:18:30 sma.sma_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:59.769   19:18:30 sma.sma_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 590480'
00:19:59.769  killing process with pid 590480
00:19:59.769   19:18:30 sma.sma_discovery -- common/autotest_common.sh@973 -- # kill 590480
00:19:59.769   19:18:30 sma.sma_discovery -- common/autotest_common.sh@978 -- # wait 590480
00:20:02.294   19:18:32 sma.sma_discovery -- sma/discovery.sh@30 -- # killprocess 590481
00:20:02.294   19:18:32 sma.sma_discovery -- common/autotest_common.sh@954 -- # '[' -z 590481 ']'
00:20:02.294   19:18:32 sma.sma_discovery -- common/autotest_common.sh@958 -- # kill -0 590481
00:20:02.294    19:18:32 sma.sma_discovery -- common/autotest_common.sh@959 -- # uname
00:20:02.294   19:18:32 sma.sma_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:02.294    19:18:32 sma.sma_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 590481
00:20:02.294   19:18:32 sma.sma_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:20:02.294   19:18:32 sma.sma_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:20:02.294   19:18:32 sma.sma_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 590481'
00:20:02.294  killing process with pid 590481
00:20:02.294   19:18:32 sma.sma_discovery -- common/autotest_common.sh@973 -- # kill 590481
00:20:02.294   19:18:32 sma.sma_discovery -- common/autotest_common.sh@978 -- # wait 590481
00:20:04.265   19:18:34 sma.sma_discovery -- sma/discovery.sh@450 -- # trap - SIGINT SIGTERM EXIT
00:20:04.265  
00:20:04.265  real	1m6.541s
00:20:04.265  user	3m32.997s
00:20:04.265  sys	0m11.170s
00:20:04.265   19:18:34 sma.sma_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:04.265   19:18:34 sma.sma_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:04.265  ************************************
00:20:04.265  END TEST sma_discovery
00:20:04.265  ************************************
00:20:04.265   19:18:34 sma -- sma/sma.sh@15 -- # run_test sma_vhost /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/vhost_blk.sh
00:20:04.265   19:18:34 sma -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:20:04.265   19:18:34 sma -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:04.265   19:18:34 sma -- common/autotest_common.sh@10 -- # set +x
00:20:04.265  ************************************
00:20:04.265  START TEST sma_vhost
00:20:04.265  ************************************
00:20:04.265   19:18:34 sma.sma_vhost -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/vhost_blk.sh
00:20:04.265  * Looking for test storage...
00:20:04.265  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:20:04.265    19:18:34 sma.sma_vhost -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:20:04.265     19:18:34 sma.sma_vhost -- common/autotest_common.sh@1711 -- # lcov --version
00:20:04.265     19:18:34 sma.sma_vhost -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:20:04.265    19:18:34 sma.sma_vhost -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:20:04.265    19:18:34 sma.sma_vhost -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:20:04.265    19:18:34 sma.sma_vhost -- scripts/common.sh@333 -- # local ver1 ver1_l
00:20:04.265    19:18:34 sma.sma_vhost -- scripts/common.sh@334 -- # local ver2 ver2_l
00:20:04.265    19:18:34 sma.sma_vhost -- scripts/common.sh@336 -- # IFS=.-:
00:20:04.265    19:18:34 sma.sma_vhost -- scripts/common.sh@336 -- # read -ra ver1
00:20:04.265    19:18:34 sma.sma_vhost -- scripts/common.sh@337 -- # IFS=.-:
00:20:04.265    19:18:34 sma.sma_vhost -- scripts/common.sh@337 -- # read -ra ver2
00:20:04.265    19:18:34 sma.sma_vhost -- scripts/common.sh@338 -- # local 'op=<'
00:20:04.265    19:18:34 sma.sma_vhost -- scripts/common.sh@340 -- # ver1_l=2
00:20:04.265    19:18:34 sma.sma_vhost -- scripts/common.sh@341 -- # ver2_l=1
00:20:04.265    19:18:34 sma.sma_vhost -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:20:04.265    19:18:34 sma.sma_vhost -- scripts/common.sh@344 -- # case "$op" in
00:20:04.265    19:18:34 sma.sma_vhost -- scripts/common.sh@345 -- # : 1
00:20:04.265    19:18:34 sma.sma_vhost -- scripts/common.sh@364 -- # (( v = 0 ))
00:20:04.265    19:18:34 sma.sma_vhost -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:04.265     19:18:34 sma.sma_vhost -- scripts/common.sh@365 -- # decimal 1
00:20:04.265     19:18:34 sma.sma_vhost -- scripts/common.sh@353 -- # local d=1
00:20:04.265     19:18:34 sma.sma_vhost -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:04.265     19:18:34 sma.sma_vhost -- scripts/common.sh@355 -- # echo 1
00:20:04.265    19:18:34 sma.sma_vhost -- scripts/common.sh@365 -- # ver1[v]=1
00:20:04.265     19:18:34 sma.sma_vhost -- scripts/common.sh@366 -- # decimal 2
00:20:04.265     19:18:34 sma.sma_vhost -- scripts/common.sh@353 -- # local d=2
00:20:04.265     19:18:34 sma.sma_vhost -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:04.265     19:18:34 sma.sma_vhost -- scripts/common.sh@355 -- # echo 2
00:20:04.265    19:18:34 sma.sma_vhost -- scripts/common.sh@366 -- # ver2[v]=2
00:20:04.265    19:18:34 sma.sma_vhost -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:20:04.265    19:18:34 sma.sma_vhost -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:20:04.265    19:18:34 sma.sma_vhost -- scripts/common.sh@368 -- # return 0
00:20:04.265    19:18:34 sma.sma_vhost -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:04.265    19:18:34 sma.sma_vhost -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:20:04.265  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:04.265  		--rc genhtml_branch_coverage=1
00:20:04.265  		--rc genhtml_function_coverage=1
00:20:04.265  		--rc genhtml_legend=1
00:20:04.265  		--rc geninfo_all_blocks=1
00:20:04.265  		--rc geninfo_unexecuted_blocks=1
00:20:04.265  		
00:20:04.265  		'
00:20:04.265    19:18:34 sma.sma_vhost -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:20:04.265  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:04.265  		--rc genhtml_branch_coverage=1
00:20:04.265  		--rc genhtml_function_coverage=1
00:20:04.265  		--rc genhtml_legend=1
00:20:04.265  		--rc geninfo_all_blocks=1
00:20:04.265  		--rc geninfo_unexecuted_blocks=1
00:20:04.265  		
00:20:04.265  		'
00:20:04.265    19:18:34 sma.sma_vhost -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:20:04.265  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:04.265  		--rc genhtml_branch_coverage=1
00:20:04.265  		--rc genhtml_function_coverage=1
00:20:04.265  		--rc genhtml_legend=1
00:20:04.265  		--rc geninfo_all_blocks=1
00:20:04.265  		--rc geninfo_unexecuted_blocks=1
00:20:04.265  		
00:20:04.265  		'
00:20:04.265    19:18:34 sma.sma_vhost -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:20:04.265  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:04.265  		--rc genhtml_branch_coverage=1
00:20:04.265  		--rc genhtml_function_coverage=1
00:20:04.265  		--rc genhtml_legend=1
00:20:04.265  		--rc geninfo_all_blocks=1
00:20:04.265  		--rc geninfo_unexecuted_blocks=1
00:20:04.265  		
00:20:04.265  		'
00:20:04.265   19:18:34 sma.sma_vhost -- sma/vhost_blk.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh
00:20:04.265    19:18:34 sma.sma_vhost -- vhost/common.sh@6 -- # : false
00:20:04.265    19:18:34 sma.sma_vhost -- vhost/common.sh@7 -- # : /root/vhost_test
00:20:04.265    19:18:34 sma.sma_vhost -- vhost/common.sh@8 -- # : /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:20:04.265    19:18:34 sma.sma_vhost -- vhost/common.sh@9 -- # : qemu-img
00:20:04.265     19:18:34 sma.sma_vhost -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/..
00:20:04.265    19:18:34 sma.sma_vhost -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest
00:20:04.265    19:18:34 sma.sma_vhost -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:20:04.265    19:18:34 sma.sma_vhost -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:20:04.265    19:18:34 sma.sma_vhost -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:20:04.265    19:18:34 sma.sma_vhost -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:20:04.265    19:18:34 sma.sma_vhost -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:20:04.265      19:18:34 sma.sma_vhost -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/vhost_blk.sh
00:20:04.265     19:18:34 sma.sma_vhost -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:20:04.265    19:18:34 sma.sma_vhost -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:20:04.265    19:18:34 sma.sma_vhost -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:20:04.265    19:18:34 sma.sma_vhost -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:20:04.265    19:18:34 sma.sma_vhost -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:20:04.265    19:18:34 sma.sma_vhost -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:20:04.265    19:18:34 sma.sma_vhost -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config
00:20:04.265     19:18:34 sma.sma_vhost -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:20:04.265     19:18:34 sma.sma_vhost -- common/autotest.config@2 -- # vhost_0_main_core=0
00:20:04.265     19:18:34 sma.sma_vhost -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:20:04.265     19:18:34 sma.sma_vhost -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:20:04.265     19:18:34 sma.sma_vhost -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:20:04.265     19:18:34 sma.sma_vhost -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:20:04.265     19:18:34 sma.sma_vhost -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:20:04.265     19:18:34 sma.sma_vhost -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:20:04.265     19:18:34 sma.sma_vhost -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:20:04.265     19:18:34 sma.sma_vhost -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:20:04.265     19:18:34 sma.sma_vhost -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:20:04.265     19:18:34 sma.sma_vhost -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:20:04.266     19:18:34 sma.sma_vhost -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:20:04.266     19:18:34 sma.sma_vhost -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:20:04.266     19:18:34 sma.sma_vhost -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:20:04.266     19:18:34 sma.sma_vhost -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:20:04.266     19:18:34 sma.sma_vhost -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:20:04.266     19:18:34 sma.sma_vhost -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:20:04.266     19:18:34 sma.sma_vhost -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:20:04.266     19:18:34 sma.sma_vhost -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:20:04.266     19:18:34 sma.sma_vhost -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:20:04.266     19:18:34 sma.sma_vhost -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:20:04.266     19:18:34 sma.sma_vhost -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:20:04.266     19:18:34 sma.sma_vhost -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:20:04.266     19:18:34 sma.sma_vhost -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:20:04.266     19:18:34 sma.sma_vhost -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:20:04.266    19:18:34 sma.sma_vhost -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh
00:20:04.266     19:18:34 sma.sma_vhost -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:20:04.266     19:18:34 sma.sma_vhost -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:20:04.266     19:18:34 sma.sma_vhost -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:20:04.266     19:18:34 sma.sma_vhost -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler
00:20:04.266     19:18:34 sma.sma_vhost -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:20:04.266     19:18:34 sma.sma_vhost -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh
00:20:04.266      19:18:34 sma.sma_vhost -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:20:04.266       19:18:34 sma.sma_vhost -- scheduler/cgroups.sh@244 -- # check_cgroup
00:20:04.266       19:18:34 sma.sma_vhost -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:20:04.266       19:18:34 sma.sma_vhost -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:20:04.266       19:18:34 sma.sma_vhost -- scheduler/cgroups.sh@10 -- # echo 2
00:20:04.266      19:18:34 sma.sma_vhost -- scheduler/cgroups.sh@244 -- # cgroup_version=2
00:20:04.266   19:18:34 sma.sma_vhost -- sma/vhost_blk.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:20:04.266   19:18:34 sma.sma_vhost -- sma/vhost_blk.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:20:04.266   19:18:34 sma.sma_vhost -- sma/vhost_blk.sh@49 -- # vm_no=0
00:20:04.266   19:18:34 sma.sma_vhost -- sma/vhost_blk.sh@50 -- # bus_size=32
00:20:04.266   19:18:34 sma.sma_vhost -- sma/vhost_blk.sh@52 -- # timing_enter setup_vm
00:20:04.266   19:18:34 sma.sma_vhost -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:04.266   19:18:34 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:20:04.266   19:18:34 sma.sma_vhost -- sma/vhost_blk.sh@54 -- # vm_setup --force=0 --disk-type=virtio '--qemu-args=-qmp tcp:localhost:9090,server,nowait -device pci-bridge,chassis_nr=1,id=pci.spdk.0 -device pci-bridge,chassis_nr=2,id=pci.spdk.1' --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@518 -- # xtrace_disable
00:20:04.266   19:18:34 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:20:04.266  INFO: Creating new VM in /root/vhost_test/vms/0
00:20:04.266  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:20:04.266  INFO: TASK MASK: 1-2
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@671 -- # local node_num=0
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@672 -- # local boot_disk_present=false
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@60 -- # local verbose_out
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@61 -- # false
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@62 -- # verbose_out=
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@69 -- # local msg_type=INFO
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@70 -- # shift
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:20:04.266  INFO: NUMA NODE: 0
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@677 -- # [[ -n '' ]]
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@686 -- # [[ -z '' ]]
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@691 -- # (( 0 == 0 ))
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@691 -- # [[ virtio == virtio* ]]
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@692 -- # disks=("default_virtio.img")
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@701 -- # IFS=,
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@701 -- # read -r disk disk_type _
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@702 -- # [[ -z '' ]]
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@702 -- # disk_type=virtio
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@704 -- # case $disk_type in
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@706 -- # local raw_name=RAWSCSI
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@707 -- # local raw_disk=/root/vhost_test/vms/0/test.img
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@710 -- # [[ -f default_virtio.img ]]
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@714 -- # notice 'Creating Virtio disc /root/vhost_test/vms/0/test.img'
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@94 -- # message INFO 'Creating Virtio disc /root/vhost_test/vms/0/test.img'
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@60 -- # local verbose_out
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@61 -- # false
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@62 -- # verbose_out=
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@69 -- # local msg_type=INFO
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@70 -- # shift
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@71 -- # echo -e 'INFO: Creating Virtio disc /root/vhost_test/vms/0/test.img'
00:20:04.266  INFO: Creating Virtio disc /root/vhost_test/vms/0/test.img
00:20:04.266   19:18:34 sma.sma_vhost -- vhost/common.sh@715 -- # dd if=/dev/zero of=/root/vhost_test/vms/0/test.img bs=1024k count=1024
00:20:04.529  1024+0 records in
00:20:04.529  1024+0 records out
00:20:04.529  1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.47662 s, 2.3 GB/s
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@718 -- # cmd+=(-device "virtio-scsi-pci,num_queues=$queue_number")
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@719 -- # cmd+=(-device "scsi-hd,drive=hd$i,vendor=$raw_name")
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@720 -- # cmd+=(-drive "if=none,id=hd$i,file=$raw_disk,format=raw$raw_cache")
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@780 -- # [[ -n '' ]]
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@785 -- # (( 1 ))
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@785 -- # cmd+=("${qemu_args[@]}")
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/0/run.sh'
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/0/run.sh'
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@60 -- # local verbose_out
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@61 -- # false
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@62 -- # verbose_out=
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@69 -- # local msg_type=INFO
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@70 -- # shift
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/0/run.sh'
00:20:04.529  INFO: Saving to /root/vhost_test/vms/0/run.sh
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@787 -- # cat
00:20:04.529    19:18:35 sma.sma_vhost -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 1-2 /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :100 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10002,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/0/qemu.pid -serial file:/root/vhost_test/vms/0/serial.log -D /root/vhost_test/vms/0/qemu.log -chardev file,path=/root/vhost_test/vms/0/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10000-:22,hostfwd=tcp::10001-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device virtio-scsi-pci,num_queues=2 -device scsi-hd,drive=hd,vendor=RAWSCSI -drive if=none,id=hd,file=/root/vhost_test/vms/0/test.img,format=raw '-qmp tcp:localhost:9090,server,nowait -device pci-bridge,chassis_nr=1,id=pci.spdk.0 -device pci-bridge,chassis_nr=2,id=pci.spdk.1'
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/0/run.sh
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@827 -- # echo 10000
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@828 -- # echo 10001
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@829 -- # echo 10002
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/0/migration_port
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@832 -- # [[ -z '' ]]
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@834 -- # echo 10004
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@835 -- # echo 100
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@837 -- # [[ -z '' ]]
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@838 -- # [[ -z '' ]]
00:20:04.529   19:18:35 sma.sma_vhost -- sma/vhost_blk.sh@59 -- # vm_run 0
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@843 -- # local run_all=false
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@844 -- # local vms_to_run=
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@846 -- # getopts a-: optchar
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@856 -- # false
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@859 -- # shift 0
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@860 -- # for vm in "$@"
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@861 -- # vm_num_is_valid 0
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/0/run.sh ]]
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@866 -- # vms_to_run+=' 0'
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@871 -- # vm_is_running 0
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@369 -- # vm_num_is_valid 0
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/0
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@373 -- # return 1
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/0/run.sh'
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/0/run.sh'
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@60 -- # local verbose_out
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@61 -- # false
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@62 -- # verbose_out=
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@69 -- # local msg_type=INFO
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@70 -- # shift
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/0/run.sh'
00:20:04.529  INFO: running /root/vhost_test/vms/0/run.sh
00:20:04.529   19:18:35 sma.sma_vhost -- vhost/common.sh@877 -- # /root/vhost_test/vms/0/run.sh
00:20:04.529  Running VM in /root/vhost_test/vms/0
00:20:05.461  Waiting for QEMU pid file
00:20:06.393  === qemu.log ===
00:20:06.393  === qemu.log ===
00:20:06.393   19:18:37 sma.sma_vhost -- sma/vhost_blk.sh@60 -- # vm_wait_for_boot 300 0
00:20:06.393   19:18:37 sma.sma_vhost -- vhost/common.sh@913 -- # assert_number 300
00:20:06.393   19:18:37 sma.sma_vhost -- vhost/common.sh@281 -- # [[ 300 =~ [0-9]+ ]]
00:20:06.393   19:18:37 sma.sma_vhost -- vhost/common.sh@281 -- # return 0
00:20:06.393   19:18:37 sma.sma_vhost -- vhost/common.sh@915 -- # xtrace_disable
00:20:06.393   19:18:37 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:20:06.393  INFO: Waiting for VMs to boot
00:20:06.393  INFO: waiting for VM0 (/root/vhost_test/vms/0)
00:20:28.401  
00:20:28.401  INFO: VM0 ready
00:20:28.401  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:20:28.401  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:20:28.401  INFO: all VMs ready
00:20:28.401   19:18:58 sma.sma_vhost -- vhost/common.sh@973 -- # return 0
00:20:28.401   19:18:58 sma.sma_vhost -- sma/vhost_blk.sh@61 -- # timing_exit setup_vm
00:20:28.401   19:18:58 sma.sma_vhost -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:28.401   19:18:58 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:20:28.401   19:18:58 sma.sma_vhost -- sma/vhost_blk.sh@64 -- # vhostpid=602974
00:20:28.402   19:18:58 sma.sma_vhost -- sma/vhost_blk.sh@63 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/vhost -S /var/tmp -m 0x3 --wait-for-rpc
00:20:28.402   19:18:58 sma.sma_vhost -- sma/vhost_blk.sh@66 -- # waitforlisten 602974
00:20:28.402   19:18:58 sma.sma_vhost -- common/autotest_common.sh@835 -- # '[' -z 602974 ']'
00:20:28.402   19:18:58 sma.sma_vhost -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:28.402   19:18:58 sma.sma_vhost -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:28.402   19:18:58 sma.sma_vhost -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:28.402  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:28.402   19:18:58 sma.sma_vhost -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:28.402   19:18:58 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:20:28.402  [2024-12-06 19:18:58.671467] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:20:28.402  [2024-12-06 19:18:58.671602] [ DPDK EAL parameters: vhost --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid602974 ]
00:20:28.402  EAL: No free 2048 kB hugepages reported on node 1
00:20:28.402  [2024-12-06 19:18:58.801360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:20:28.402  [2024-12-06 19:18:58.918095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:28.402  [2024-12-06 19:18:58.918097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:20:28.967   19:18:59 sma.sma_vhost -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:28.967   19:18:59 sma.sma_vhost -- common/autotest_common.sh@868 -- # return 0
00:20:28.967   19:18:59 sma.sma_vhost -- sma/vhost_blk.sh@69 -- # rpc_cmd dpdk_cryptodev_scan_accel_module
00:20:28.967   19:18:59 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:28.967   19:18:59 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:20:28.967   19:18:59 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:28.967   19:18:59 sma.sma_vhost -- sma/vhost_blk.sh@70 -- # rpc_cmd dpdk_cryptodev_set_driver -d crypto_aesni_mb
00:20:28.967   19:18:59 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:28.967   19:18:59 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:20:28.967  [2024-12-06 19:18:59.648948] accel_dpdk_cryptodev.c: 224:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb
00:20:28.967   19:18:59 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:28.967   19:18:59 sma.sma_vhost -- sma/vhost_blk.sh@71 -- # rpc_cmd accel_assign_opc -o encrypt -m dpdk_cryptodev
00:20:28.967   19:18:59 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:28.967   19:18:59 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:20:28.967  [2024-12-06 19:18:59.656978] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:20:28.967   19:18:59 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:28.967   19:18:59 sma.sma_vhost -- sma/vhost_blk.sh@72 -- # rpc_cmd accel_assign_opc -o decrypt -m dpdk_cryptodev
00:20:28.967   19:18:59 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:28.967   19:18:59 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:20:28.967  [2024-12-06 19:18:59.665002] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:20:28.967   19:18:59 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:28.967   19:18:59 sma.sma_vhost -- sma/vhost_blk.sh@73 -- # rpc_cmd framework_start_init
00:20:28.967   19:18:59 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:28.967   19:18:59 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:20:28.967  [2024-12-06 19:18:59.866371] accel_dpdk_cryptodev.c:1179:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 1
00:20:29.225   19:19:00 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:29.225   19:19:00 sma.sma_vhost -- sma/vhost_blk.sh@93 -- # smapid=603117
00:20:29.225   19:19:00 sma.sma_vhost -- sma/vhost_blk.sh@96 -- # sma_waitforlisten
00:20:29.225   19:19:00 sma.sma_vhost -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:20:29.225   19:19:00 sma.sma_vhost -- sma/common.sh@8 -- # local sma_port=8080
00:20:29.225   19:19:00 sma.sma_vhost -- sma/common.sh@10 -- # (( i = 0 ))
00:20:29.225   19:19:00 sma.sma_vhost -- sma/vhost_blk.sh@75 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:20:29.225   19:19:00 sma.sma_vhost -- sma/common.sh@10 -- # (( i < 5 ))
00:20:29.225    19:19:00 sma.sma_vhost -- sma/vhost_blk.sh@75 -- # cat
00:20:29.225   19:19:00 sma.sma_vhost -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:20:29.225   19:19:00 sma.sma_vhost -- sma/common.sh@14 -- # sleep 1s
00:20:29.482  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:20:29.482  I0000 00:00:1733509140.329690  603117 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:20:30.415   19:19:01 sma.sma_vhost -- sma/common.sh@10 -- # (( i++ ))
00:20:30.415   19:19:01 sma.sma_vhost -- sma/common.sh@10 -- # (( i < 5 ))
00:20:30.415   19:19:01 sma.sma_vhost -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:20:30.415   19:19:01 sma.sma_vhost -- sma/common.sh@12 -- # return 0
00:20:30.415    19:19:01 sma.sma_vhost -- sma/vhost_blk.sh@99 -- # vm_exec 0 'lsblk | grep -E "^vd." | wc -l'
00:20:30.415    19:19:01 sma.sma_vhost -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:20:30.415    19:19:01 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:20:30.415    19:19:01 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:20:30.415    19:19:01 sma.sma_vhost -- vhost/common.sh@338 -- # local vm_num=0
00:20:30.415    19:19:01 sma.sma_vhost -- vhost/common.sh@339 -- # shift
00:20:30.415     19:19:01 sma.sma_vhost -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:20:30.415     19:19:01 sma.sma_vhost -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:20:30.415     19:19:01 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:20:30.415     19:19:01 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:20:30.415     19:19:01 sma.sma_vhost -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:20:30.415     19:19:01 sma.sma_vhost -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:20:30.415    19:19:01 sma.sma_vhost -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'lsblk | grep -E "^vd." | wc -l'
00:20:30.415  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:20:30.673   19:19:01 sma.sma_vhost -- sma/vhost_blk.sh@99 -- # [[ 0 -eq 0 ]]
00:20:30.673   19:19:01 sma.sma_vhost -- sma/vhost_blk.sh@102 -- # rpc_cmd bdev_null_create null0 100 4096
00:20:30.673   19:19:01 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:30.673   19:19:01 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:20:30.673  null0
00:20:30.673   19:19:01 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:30.673   19:19:01 sma.sma_vhost -- sma/vhost_blk.sh@103 -- # rpc_cmd bdev_null_create null1 100 4096
00:20:30.673   19:19:01 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:30.673   19:19:01 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:20:30.931  null1
00:20:30.931   19:19:01 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:30.931    19:19:01 sma.sma_vhost -- sma/vhost_blk.sh@104 -- # jq -r '.[].uuid'
00:20:30.931    19:19:01 sma.sma_vhost -- sma/vhost_blk.sh@104 -- # rpc_cmd bdev_get_bdevs -b null0
00:20:30.931    19:19:01 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:30.931    19:19:01 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:20:30.931    19:19:01 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:30.931   19:19:01 sma.sma_vhost -- sma/vhost_blk.sh@104 -- # uuid=946aa955-ac3a-49f3-9f79-b65640ac1d88
00:20:30.931    19:19:01 sma.sma_vhost -- sma/vhost_blk.sh@105 -- # rpc_cmd bdev_get_bdevs -b null1
00:20:30.931    19:19:01 sma.sma_vhost -- sma/vhost_blk.sh@105 -- # jq -r '.[].uuid'
00:20:30.931    19:19:01 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:30.931    19:19:01 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:20:30.931    19:19:01 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:30.931   19:19:01 sma.sma_vhost -- sma/vhost_blk.sh@105 -- # uuid2=f87f3390-26bb-49ca-82f4-972f7c4fcc34
00:20:30.932    19:19:01 sma.sma_vhost -- sma/vhost_blk.sh@108 -- # create_device 0 946aa955-ac3a-49f3-9f79-b65640ac1d88
00:20:30.932    19:19:01 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:30.932    19:19:01 sma.sma_vhost -- sma/vhost_blk.sh@108 -- # jq -r .handle
00:20:30.932     19:19:01 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 946aa955-ac3a-49f3-9f79-b65640ac1d88
00:20:30.932     19:19:01 sma.sma_vhost -- sma/common.sh@20 -- # python
00:20:31.189  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:20:31.189  I0000 00:00:1733509142.002533  603304 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:20:31.189  I0000 00:00:1733509142.004256  603304 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:20:31.189  I0000 00:00:1733509142.005764  603425 subchannel.cc:806] subchannel 0x55ffb9477560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55ffb948df20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55ffb94446e0, grpc.internal.client_channel_call_destination=0x7f6dfbcff390, grpc.internal.event_engine=0x55ffb94735b0, grpc.internal.security_connector=0x55ffb9473540, grpc.internal.subchannel_pool=0x55ffb94c7410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55ffb9391a60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:02.00528843+01:00"}), backing off for 999 ms
00:20:31.189  VHOST_CONFIG: (/var/tmp/sma-0) vhost-user server: socket created, fd: 232
00:20:31.189  VHOST_CONFIG: (/var/tmp/sma-0) binding succeeded
00:20:32.123  VHOST_CONFIG: (/var/tmp/sma-0) new vhost user connection is 59
00:20:32.123  VHOST_CONFIG: (/var/tmp/sma-0) new device, handle is 0
00:20:32.123  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES
00:20:32.123  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_PROTOCOL_FEATURES
00:20:32.123  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_PROTOCOL_FEATURES
00:20:32.123  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Vhost-user protocol features: 0x11ebf
00:20:32.123  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_QUEUE_NUM
00:20:32.123  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_BACKEND_REQ_FD
00:20:32.123  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_OWNER
00:20:32.123  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES
00:20:32.123  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:20:32.123  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:236
00:20:32.123  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR
00:20:32.123  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:20:32.123  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:237
00:20:32.123  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR
00:20:32.123  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_CONFIG
00:20:32.123   19:19:03 sma.sma_vhost -- sma/vhost_blk.sh@108 -- # devid0=virtio_blk:sma-0
00:20:32.123   19:19:03 sma.sma_vhost -- sma/vhost_blk.sh@109 -- # rpc_cmd vhost_get_controllers -n sma-0
00:20:32.123   19:19:03 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:32.123   19:19:03 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:20:32.381  [
00:20:32.381  {
00:20:32.381  "ctrlr": "sma-0",
00:20:32.381  "cpumask": "0x3",
00:20:32.381  "delay_base_us": 0,
00:20:32.381  "iops_threshold": 60000,
00:20:32.381  "socket": "/var/tmp/sma-0",
00:20:32.381  "sessions": [
00:20:32.381  {
00:20:32.381  "vid": 0,
00:20:32.381  "id": 0,
00:20:32.381  "name": "sma-0s0",
00:20:32.381  "started": false,
00:20:32.381  "max_queues": 0,
00:20:32.381  "inflight_task_cnt": 0
00:20:32.381  }
00:20:32.381  ],
00:20:32.381  "backend_specific": {
00:20:32.381  "block": {
00:20:32.381  "readonly": false,
00:20:32.381  "bdev": "null0",
00:20:32.381  "transport": "vhost_user_blk"
00:20:32.381  }
00:20:32.381  }
00:20:32.381  }
00:20:32.381  ]
00:20:32.381   19:19:03 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:32.381    19:19:03 sma.sma_vhost -- sma/vhost_blk.sh@111 -- # create_device 1 f87f3390-26bb-49ca-82f4-972f7c4fcc34
00:20:32.381    19:19:03 sma.sma_vhost -- sma/vhost_blk.sh@111 -- # jq -r .handle
00:20:32.381    19:19:03 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:32.381     19:19:03 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 f87f3390-26bb-49ca-82f4-972f7c4fcc34
00:20:32.381     19:19:03 sma.sma_vhost -- sma/common.sh@20 -- # python
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150005446
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000008):
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 0
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 0
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 0
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 1
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 0
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_INFLIGHT_FD
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd num_queues: 2
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd queue_size: 128
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_size: 4224
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_offset: 0
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) send inflight fd: 58
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_INFLIGHT_FD
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_size: 4224
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_offset: 0
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd num_queues: 2
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd queue_size: 128
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd fd: 238
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd pervq_inflight_size: 2112
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:58
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:236
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150005446
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_MEM_TABLE
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) guest memory region size: 0x40000000
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) 	 guest physical addr: 0x0
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) 	 guest virtual  addr: 0x7fdd2fe00000
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) 	 host  virtual  addr: 0x7febbee00000
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap addr : 0x7febbee00000
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap size : 0x40000000
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap align: 0x200000
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap off  : 0x0
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 last_used_idx:0 last_avail_idx:0.
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:0 file:239
00:20:32.381  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM
00:20:32.640  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE
00:20:32.640  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 last_used_idx:0 last_avail_idx:0.
00:20:32.640  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR
00:20:32.640  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK
00:20:32.640  VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:1 file:240
00:20:32.640  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:20:32.640  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 0
00:20:32.640  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:20:32.640  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 1
00:20:32.640  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:20:32.640  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:20:32.640  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x0000000f):
00:20:32.640  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 0
00:20:32.640  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 1
00:20:32.640  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 1
00:20:32.640  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 1
00:20:32.640  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 1
00:20:32.640  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:20:32.640  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:20:32.640  VHOST_CONFIG: (/var/tmp/sma-0) virtio is now ready for processing.
00:20:32.640  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:20:32.640  I0000 00:00:1733509143.345011  603584 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:20:32.640  I0000 00:00:1733509143.346770  603584 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:20:32.640  I0000 00:00:1733509143.348405  603590 subchannel.cc:806] subchannel 0x55f98b4b2560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55f98b4c8f20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55f98b47f6e0, grpc.internal.client_channel_call_destination=0x7fe00a497390, grpc.internal.event_engine=0x55f98b4ae5b0, grpc.internal.security_connector=0x55f98b3f2d60, grpc.internal.subchannel_pool=0x55f98b502410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55f98b3cca60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:03.347904643+01:00"}), backing off for 1000 ms
00:20:32.640  VHOST_CONFIG: (/var/tmp/sma-1) vhost-user server: socket created, fd: 243
00:20:32.640  VHOST_CONFIG: (/var/tmp/sma-1) binding succeeded
00:20:33.575  VHOST_CONFIG: (/var/tmp/sma-1) new vhost user connection is 241
00:20:33.575  VHOST_CONFIG: (/var/tmp/sma-1) new device, handle is 1
00:20:33.575  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_FEATURES
00:20:33.575  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_PROTOCOL_FEATURES
00:20:33.575  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_PROTOCOL_FEATURES
00:20:33.575  VHOST_CONFIG: (/var/tmp/sma-1) negotiated Vhost-user protocol features: 0x11ebf
00:20:33.575  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_QUEUE_NUM
00:20:33.575  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_BACKEND_REQ_FD
00:20:33.575  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_OWNER
00:20:33.575  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_FEATURES
00:20:33.575  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_CALL
00:20:33.575  VHOST_CONFIG: (/var/tmp/sma-1) vring call idx:0 file:245
00:20:33.575  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ERR
00:20:33.575  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_CALL
00:20:33.575  VHOST_CONFIG: (/var/tmp/sma-1) vring call idx:1 file:246
00:20:33.575  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ERR
00:20:33.575  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_CONFIG
00:20:33.575   19:19:04 sma.sma_vhost -- sma/vhost_blk.sh@111 -- # devid1=virtio_blk:sma-1
00:20:33.575   19:19:04 sma.sma_vhost -- sma/vhost_blk.sh@112 -- # rpc_cmd vhost_get_controllers -n sma-0
00:20:33.575   19:19:04 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:33.575   19:19:04 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:20:33.575  [
00:20:33.575  {
00:20:33.575  "ctrlr": "sma-0",
00:20:33.575  "cpumask": "0x3",
00:20:33.575  "delay_base_us": 0,
00:20:33.575  "iops_threshold": 60000,
00:20:33.575  "socket": "/var/tmp/sma-0",
00:20:33.575  "sessions": [
00:20:33.575  {
00:20:33.575  "vid": 0,
00:20:33.575  "id": 0,
00:20:33.575  "name": "sma-0s0",
00:20:33.575  "started": true,
00:20:33.575  "max_queues": 2,
00:20:33.575  "inflight_task_cnt": 0
00:20:33.575  }
00:20:33.575  ],
00:20:33.575  "backend_specific": {
00:20:33.575  "block": {
00:20:33.575  "readonly": false,
00:20:33.575  "bdev": "null0",
00:20:33.575  "transport": "vhost_user_blk"
00:20:33.575  }
00:20:33.575  }
00:20:33.575  }
00:20:33.575  ]
00:20:33.575   19:19:04 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:33.575   19:19:04 sma.sma_vhost -- sma/vhost_blk.sh@113 -- # rpc_cmd vhost_get_controllers -n sma-1
00:20:33.575   19:19:04 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:33.575   19:19:04 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:20:33.575  [
00:20:33.575  {
00:20:33.575  "ctrlr": "sma-1",
00:20:33.575  "cpumask": "0x3",
00:20:33.575  "delay_base_us": 0,
00:20:33.575  "iops_threshold": 60000,
00:20:33.575  "socket": "/var/tmp/sma-1",
00:20:33.575  "sessions": [
00:20:33.575  {
00:20:33.575  "vid": 1,
00:20:33.575  "id": 0,
00:20:33.575  "name": "sma-1s1",
00:20:33.575  "started": false,
00:20:33.575  "max_queues": 0,
00:20:33.575  "inflight_task_cnt": 0
00:20:33.575  }
00:20:33.575  ],
00:20:33.575  "backend_specific": {
00:20:33.575  "block": {
00:20:33.575  "readonly": false,
00:20:33.575  "bdev": "null1",
00:20:33.575  "transport": "vhost_user_blk"
00:20:33.575  }
00:20:33.575  }
00:20:33.575  }
00:20:33.575  ]
00:20:33.575   19:19:04 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:33.575   19:19:04 sma.sma_vhost -- sma/vhost_blk.sh@114 -- # [[ virtio_blk:sma-0 != \v\i\r\t\i\o\_\b\l\k\:\s\m\a\-\1 ]]
00:20:33.575    19:19:04 sma.sma_vhost -- sma/vhost_blk.sh@117 -- # rpc_cmd vhost_get_controllers
00:20:33.575    19:19:04 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:33.575    19:19:04 sma.sma_vhost -- sma/vhost_blk.sh@117 -- # jq -r '. | length'
00:20:33.575    19:19:04 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:20:33.575    19:19:04 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:33.575  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_FEATURES
00:20:33.575  VHOST_CONFIG: (/var/tmp/sma-1) negotiated Virtio features: 0x150005446
00:20:33.575  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_STATUS
00:20:33.575  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_STATUS
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) new device status(0x00000008):
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) 	-RESET: 0
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) 	-ACKNOWLEDGE: 0
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) 	-DRIVER: 0
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) 	-FEATURES_OK: 1
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) 	-DRIVER_OK: 0
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) 	-DEVICE_NEED_RESET: 0
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) 	-FAILED: 0
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_INFLIGHT_FD
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) get_inflight_fd num_queues: 2
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) get_inflight_fd queue_size: 128
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) send inflight mmap_size: 4224
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) send inflight mmap_offset: 0
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) send inflight fd: 60
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_INFLIGHT_FD
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) set_inflight_fd mmap_size: 4224
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) set_inflight_fd mmap_offset: 0
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) set_inflight_fd num_queues: 2
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) set_inflight_fd queue_size: 128
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) set_inflight_fd fd: 247
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) set_inflight_fd pervq_inflight_size: 2112
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_CALL
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) vring call idx:0 file:60
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_CALL
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) vring call idx:1 file:245
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_FEATURES
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) negotiated Virtio features: 0x150005446
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_STATUS
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_MEM_TABLE
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) guest memory region size: 0x40000000
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) 	 guest physical addr: 0x0
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) 	 guest virtual  addr: 0x7fdd2fe00000
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) 	 host  virtual  addr: 0x7feb7ee00000
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) 	 mmap addr : 0x7feb7ee00000
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) 	 mmap size : 0x40000000
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) 	 mmap align: 0x200000
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) 	 mmap off  : 0x0
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_NUM
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_BASE
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) vring base idx:0 last_used_idx:0 last_avail_idx:0.
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ADDR
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_KICK
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) vring kick idx:0 file:248
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_NUM
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_BASE
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) vring base idx:1 last_used_idx:0 last_avail_idx:0.
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ADDR
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_KICK
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) vring kick idx:1 file:249
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ENABLE
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) set queue enable: 1 to qp idx: 0
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ENABLE
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) set queue enable: 1 to qp idx: 1
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_STATUS
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_STATUS
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) new device status(0x0000000f):
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) 	-RESET: 0
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) 	-ACKNOWLEDGE: 1
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) 	-DRIVER: 1
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) 	-FEATURES_OK: 1
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) 	-DRIVER_OK: 1
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) 	-DEVICE_NEED_RESET: 0
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) 	-FAILED: 0
00:20:33.576  VHOST_CONFIG: (/var/tmp/sma-1) virtio is now ready for processing.
00:20:33.576   19:19:04 sma.sma_vhost -- sma/vhost_blk.sh@117 -- # [[ 2 -eq 2 ]]
00:20:33.576    19:19:04 sma.sma_vhost -- sma/vhost_blk.sh@121 -- # create_device 0 946aa955-ac3a-49f3-9f79-b65640ac1d88
00:20:33.576    19:19:04 sma.sma_vhost -- sma/vhost_blk.sh@121 -- # jq -r .handle
00:20:33.576    19:19:04 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:33.576     19:19:04 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 946aa955-ac3a-49f3-9f79-b65640ac1d88
00:20:33.576     19:19:04 sma.sma_vhost -- sma/common.sh@20 -- # python
00:20:33.835  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:20:33.835  I0000 00:00:1733509144.690374  603758 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:20:33.835  I0000 00:00:1733509144.692276  603758 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:20:33.835  I0000 00:00:1733509144.693824  603764 subchannel.cc:806] subchannel 0x5608abe77560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5608abe8df20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5608abe446e0, grpc.internal.client_channel_call_destination=0x7f2d0d526390, grpc.internal.event_engine=0x5608abe735b0, grpc.internal.security_connector=0x5608abe73540, grpc.internal.subchannel_pool=0x5608abec7410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5608abd91a60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:04.693340682+01:00"}), backing off for 1000 ms
00:20:33.835   19:19:04 sma.sma_vhost -- sma/vhost_blk.sh@121 -- # tmp0=virtio_blk:sma-0
00:20:33.835    19:19:04 sma.sma_vhost -- sma/vhost_blk.sh@122 -- # create_device 1 f87f3390-26bb-49ca-82f4-972f7c4fcc34
00:20:33.835    19:19:04 sma.sma_vhost -- sma/vhost_blk.sh@122 -- # jq -r .handle
00:20:33.835    19:19:04 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:33.835     19:19:04 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 f87f3390-26bb-49ca-82f4-972f7c4fcc34
00:20:33.835     19:19:04 sma.sma_vhost -- sma/common.sh@20 -- # python
00:20:34.093  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:20:34.093  I0000 00:00:1733509145.042169  603789 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:20:34.361  I0000 00:00:1733509145.044275  603789 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:20:34.361  I0000 00:00:1733509145.045977  603793 subchannel.cc:806] subchannel 0x5573d0bb4560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5573d0bcaf20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5573d0b816e0, grpc.internal.client_channel_call_destination=0x7f8d26779390, grpc.internal.event_engine=0x5573d0bb05b0, grpc.internal.security_connector=0x5573d0af4d60, grpc.internal.subchannel_pool=0x5573d0c04410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5573d0acea60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:05.045452622+01:00"}), backing off for 1000 ms
00:20:34.361   19:19:05 sma.sma_vhost -- sma/vhost_blk.sh@122 -- # tmp1=virtio_blk:sma-1
00:20:34.361   19:19:05 sma.sma_vhost -- sma/vhost_blk.sh@125 -- # NOT create_device 1 946aa955-ac3a-49f3-9f79-b65640ac1d88
00:20:34.361   19:19:05 sma.sma_vhost -- sma/vhost_blk.sh@125 -- # jq -r .handle
00:20:34.361   19:19:05 sma.sma_vhost -- common/autotest_common.sh@652 -- # local es=0
00:20:34.361   19:19:05 sma.sma_vhost -- common/autotest_common.sh@654 -- # valid_exec_arg create_device 1 946aa955-ac3a-49f3-9f79-b65640ac1d88
00:20:34.361   19:19:05 sma.sma_vhost -- common/autotest_common.sh@640 -- # local arg=create_device
00:20:34.361   19:19:05 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:34.362    19:19:05 sma.sma_vhost -- common/autotest_common.sh@644 -- # type -t create_device
00:20:34.362   19:19:05 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:34.362   19:19:05 sma.sma_vhost -- common/autotest_common.sh@655 -- # create_device 1 946aa955-ac3a-49f3-9f79-b65640ac1d88
00:20:34.362   19:19:05 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:34.362    19:19:05 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 946aa955-ac3a-49f3-9f79-b65640ac1d88
00:20:34.362    19:19:05 sma.sma_vhost -- sma/common.sh@20 -- # python
00:20:34.630  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:20:34.630  I0000 00:00:1733509145.415047  603816 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:20:34.630  I0000 00:00:1733509145.417027  603816 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:20:34.630  I0000 00:00:1733509145.418571  603943 subchannel.cc:806] subchannel 0x5618e63d3560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5618e63e9f20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5618e63a06e0, grpc.internal.client_channel_call_destination=0x7f51e626e390, grpc.internal.event_engine=0x5618e63cf5b0, grpc.internal.security_connector=0x5618e6313d60, grpc.internal.subchannel_pool=0x5618e6423410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5618e62eda60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:05.418097678+01:00"}), backing off for 999 ms
00:20:34.630  Traceback (most recent call last):
00:20:34.630    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:20:34.630      main(sys.argv[1:])
00:20:34.630    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:20:34.630      result = client.call(request['method'], request.get('params', {}))
00:20:34.630               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:20:34.630    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:20:34.630      response = func(request=json_format.ParseDict(params, input()))
00:20:34.630                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:20:34.630    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:20:34.630      return _end_unary_response_blocking(state, call, False, None)
00:20:34.630             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:20:34.630    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:20:34.630      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:20:34.630      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:20:34.630  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:20:34.630  	status = StatusCode.INTERNAL
00:20:34.630  	details = "Failed to create vhost device"
00:20:34.630  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-12-06T19:19:05.464861926+01:00", grpc_status:13, grpc_message:"Failed to create vhost device"}"
00:20:34.630  >
00:20:34.630   19:19:05 sma.sma_vhost -- common/autotest_common.sh@655 -- # es=1
00:20:34.630   19:19:05 sma.sma_vhost -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:34.630   19:19:05 sma.sma_vhost -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:34.630   19:19:05 sma.sma_vhost -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:20:34.630    19:19:05 sma.sma_vhost -- sma/vhost_blk.sh@128 -- # vm_exec 0 'lsblk | grep -E "^vd." | wc -l'
00:20:34.630    19:19:05 sma.sma_vhost -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:20:34.630    19:19:05 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:20:34.630    19:19:05 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:20:34.630    19:19:05 sma.sma_vhost -- vhost/common.sh@338 -- # local vm_num=0
00:20:34.630    19:19:05 sma.sma_vhost -- vhost/common.sh@339 -- # shift
00:20:34.630     19:19:05 sma.sma_vhost -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:20:34.630     19:19:05 sma.sma_vhost -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:20:34.630     19:19:05 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:20:34.630     19:19:05 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:20:34.630     19:19:05 sma.sma_vhost -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:20:34.630     19:19:05 sma.sma_vhost -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:20:34.630    19:19:05 sma.sma_vhost -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'lsblk | grep -E "^vd." | wc -l'
00:20:34.630  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:20:34.889   19:19:05 sma.sma_vhost -- sma/vhost_blk.sh@128 -- # [[ 2 -eq 2 ]]
00:20:34.889    19:19:05 sma.sma_vhost -- sma/vhost_blk.sh@130 -- # rpc_cmd vhost_get_controllers
00:20:34.889    19:19:05 sma.sma_vhost -- sma/vhost_blk.sh@130 -- # jq -r '. | length'
00:20:34.889    19:19:05 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:34.889    19:19:05 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:20:34.889    19:19:05 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:34.889   19:19:05 sma.sma_vhost -- sma/vhost_blk.sh@130 -- # [[ 2 -eq 2 ]]
00:20:34.889   19:19:05 sma.sma_vhost -- sma/vhost_blk.sh@131 -- # [[ virtio_blk:sma-0 == \v\i\r\t\i\o\_\b\l\k\:\s\m\a\-\0 ]]
00:20:34.889   19:19:05 sma.sma_vhost -- sma/vhost_blk.sh@132 -- # [[ virtio_blk:sma-1 == \v\i\r\t\i\o\_\b\l\k\:\s\m\a\-\1 ]]
00:20:34.889   19:19:05 sma.sma_vhost -- sma/vhost_blk.sh@135 -- # delete_device virtio_blk:sma-0
00:20:34.889   19:19:05 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:35.146  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:20:35.146  I0000 00:00:1733509145.963134  603975 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:20:35.146  I0000 00:00:1733509145.965075  603975 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:20:35.147  I0000 00:00:1733509145.966633  603981 subchannel.cc:806] subchannel 0x55ce12668560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55ce1267ef20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55ce126356e0, grpc.internal.client_channel_call_destination=0x7fc45138a390, grpc.internal.event_engine=0x55ce126645b0, grpc.internal.security_connector=0x55ce12664540, grpc.internal.subchannel_pool=0x55ce126b8410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55ce12582a60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:05.966169952+01:00"}), backing off for 999 ms
00:20:35.147  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:20:35.147  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000000):
00:20:35.147  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 1
00:20:35.147  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 0
00:20:35.147  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 0
00:20:35.147  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 0
00:20:35.147  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 0
00:20:35.147  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:20:35.147  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:20:35.147  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:20:35.147  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 0
00:20:35.147  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:20:35.147  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 1
00:20:35.147  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE
00:20:35.147  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 file:50
00:20:35.147  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE
00:20:35.147  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 file:0
00:20:35.147  VHOST_CONFIG: (/var/tmp/sma-0) vhost peer closed
00:20:35.405  {}
00:20:35.405   19:19:06 sma.sma_vhost -- sma/vhost_blk.sh@136 -- # NOT rpc_cmd vhost_get_controllers -n sma-0
00:20:35.405   19:19:06 sma.sma_vhost -- common/autotest_common.sh@652 -- # local es=0
00:20:35.405   19:19:06 sma.sma_vhost -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd vhost_get_controllers -n sma-0
00:20:35.405   19:19:06 sma.sma_vhost -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:20:35.405   19:19:06 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:35.405    19:19:06 sma.sma_vhost -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:20:35.405   19:19:06 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:35.405   19:19:06 sma.sma_vhost -- common/autotest_common.sh@655 -- # rpc_cmd vhost_get_controllers -n sma-0
00:20:35.405   19:19:06 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:35.405   19:19:06 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:20:35.405  request:
00:20:35.405  {
00:20:35.405  "name": "sma-0",
00:20:35.405  "method": "vhost_get_controllers",
00:20:35.405  "req_id": 1
00:20:35.405  }
00:20:35.405  Got JSON-RPC error response
00:20:35.405  response:
00:20:35.405  {
00:20:35.405  "code": -32603,
00:20:35.405  "message": "No such device"
00:20:35.405  }
00:20:35.405   19:19:06 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:20:35.405   19:19:06 sma.sma_vhost -- common/autotest_common.sh@655 -- # es=1
00:20:35.405   19:19:06 sma.sma_vhost -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:35.405   19:19:06 sma.sma_vhost -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:35.405   19:19:06 sma.sma_vhost -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:20:35.405    19:19:06 sma.sma_vhost -- sma/vhost_blk.sh@137 -- # rpc_cmd vhost_get_controllers
00:20:35.405    19:19:06 sma.sma_vhost -- sma/vhost_blk.sh@137 -- # jq -r '. | length'
00:20:35.405    19:19:06 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:35.405    19:19:06 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:20:35.405    19:19:06 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:35.405   19:19:06 sma.sma_vhost -- sma/vhost_blk.sh@137 -- # [[ 1 -eq 1 ]]
00:20:35.405   19:19:06 sma.sma_vhost -- sma/vhost_blk.sh@139 -- # delete_device virtio_blk:sma-1
00:20:35.405   19:19:06 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:35.665  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:20:35.665  I0000 00:00:1733509146.421745  604008 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:20:35.665  I0000 00:00:1733509146.423643  604008 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:20:35.665  I0000 00:00:1733509146.425221  604119 subchannel.cc:806] subchannel 0x5618691ba560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5618691d0f20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5618691876e0, grpc.internal.client_channel_call_destination=0x7f2d5545d390, grpc.internal.event_engine=0x5618691b65b0, grpc.internal.security_connector=0x5618691b6540, grpc.internal.subchannel_pool=0x56186920a410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5618690d4a60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:06.424674013+01:00"}), backing off for 1000 ms
00:20:35.665  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_STATUS
00:20:35.665  VHOST_CONFIG: (/var/tmp/sma-1) new device status(0x00000000):
00:20:35.665  VHOST_CONFIG: (/var/tmp/sma-1) 	-RESET: 1
00:20:35.665  VHOST_CONFIG: (/var/tmp/sma-1) 	-ACKNOWLEDGE: 0
00:20:35.665  VHOST_CONFIG: (/var/tmp/sma-1) 	-DRIVER: 0
00:20:35.665  VHOST_CONFIG: (/var/tmp/sma-1) 	-FEATURES_OK: 0
00:20:35.665  VHOST_CONFIG: (/var/tmp/sma-1) 	-DRIVER_OK: 0
00:20:35.665  VHOST_CONFIG: (/var/tmp/sma-1) 	-DEVICE_NEED_RESET: 0
00:20:35.665  VHOST_CONFIG: (/var/tmp/sma-1) 	-FAILED: 0
00:20:35.665  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ENABLE
00:20:35.665  VHOST_CONFIG: (/var/tmp/sma-1) set queue enable: 0 to qp idx: 0
00:20:35.665  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ENABLE
00:20:35.665  VHOST_CONFIG: (/var/tmp/sma-1) set queue enable: 0 to qp idx: 1
00:20:35.665  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_VRING_BASE
00:20:35.665  VHOST_CONFIG: (/var/tmp/sma-1) vring base idx:0 file:50
00:20:35.665  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_VRING_BASE
00:20:35.665  VHOST_CONFIG: (/var/tmp/sma-1) vring base idx:1 file:0
00:20:35.924  VHOST_CONFIG: (/var/tmp/sma-1) vhost peer closed
00:20:35.924  {}
00:20:35.924   19:19:06 sma.sma_vhost -- sma/vhost_blk.sh@140 -- # NOT rpc_cmd vhost_get_controllers -n sma-1
00:20:35.924   19:19:06 sma.sma_vhost -- common/autotest_common.sh@652 -- # local es=0
00:20:35.924   19:19:06 sma.sma_vhost -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd vhost_get_controllers -n sma-1
00:20:35.924   19:19:06 sma.sma_vhost -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:20:35.924   19:19:06 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:35.924    19:19:06 sma.sma_vhost -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:20:35.924   19:19:06 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:35.924   19:19:06 sma.sma_vhost -- common/autotest_common.sh@655 -- # rpc_cmd vhost_get_controllers -n sma-1
00:20:35.924   19:19:06 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:35.924   19:19:06 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:20:35.924  request:
00:20:35.924  {
00:20:35.924  "name": "sma-1",
00:20:35.924  "method": "vhost_get_controllers",
00:20:35.924  "req_id": 1
00:20:35.924  }
00:20:35.924  Got JSON-RPC error response
00:20:35.924  response:
00:20:35.924  {
00:20:35.924  "code": -32603,
00:20:35.924  "message": "No such device"
00:20:35.924  }
00:20:35.924   19:19:06 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:20:35.924   19:19:06 sma.sma_vhost -- common/autotest_common.sh@655 -- # es=1
00:20:35.924   19:19:06 sma.sma_vhost -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:35.924   19:19:06 sma.sma_vhost -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:35.924   19:19:06 sma.sma_vhost -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:20:35.924    19:19:06 sma.sma_vhost -- sma/vhost_blk.sh@141 -- # rpc_cmd vhost_get_controllers
00:20:35.924    19:19:06 sma.sma_vhost -- sma/vhost_blk.sh@141 -- # jq -r '. | length'
00:20:35.924    19:19:06 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:35.924    19:19:06 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:20:35.924    19:19:06 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:35.924   19:19:06 sma.sma_vhost -- sma/vhost_blk.sh@141 -- # [[ 0 -eq 0 ]]
00:20:35.924   19:19:06 sma.sma_vhost -- sma/vhost_blk.sh@144 -- # delete_device virtio_blk:sma-0
00:20:35.924   19:19:06 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:36.182  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:20:36.182  I0000 00:00:1733509146.982621  604157 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:20:36.182  I0000 00:00:1733509146.984351  604157 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:20:36.182  I0000 00:00:1733509146.985841  604160 subchannel.cc:806] subchannel 0x56509712e560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x565097144f20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5650970fb6e0, grpc.internal.client_channel_call_destination=0x7f22354b4390, grpc.internal.event_engine=0x56509712a5b0, grpc.internal.security_connector=0x56509712a540, grpc.internal.subchannel_pool=0x56509717e410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x565097048a60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:06.98533423+01:00"}), backing off for 1000 ms
00:20:36.182  {}
00:20:36.182   19:19:07 sma.sma_vhost -- sma/vhost_blk.sh@145 -- # delete_device virtio_blk:sma-1
00:20:36.182   19:19:07 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:36.441  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:20:36.441  I0000 00:00:1733509147.258068  604182 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:20:36.441  I0000 00:00:1733509147.259935  604182 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:20:36.441  I0000 00:00:1733509147.261527  604188 subchannel.cc:806] subchannel 0x55f37255b560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55f372571f20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55f3725286e0, grpc.internal.client_channel_call_destination=0x7f21f5991390, grpc.internal.event_engine=0x55f3725575b0, grpc.internal.security_connector=0x55f372557540, grpc.internal.subchannel_pool=0x55f3725ab410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55f372475a60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:07.26099906+01:00"}), backing off for 1000 ms
00:20:36.441  {}
00:20:36.441    19:19:07 sma.sma_vhost -- sma/vhost_blk.sh@148 -- # vm_exec 0 'lsblk | grep -E "^vd." | wc -l'
00:20:36.441    19:19:07 sma.sma_vhost -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:20:36.441    19:19:07 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:20:36.441    19:19:07 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:20:36.441    19:19:07 sma.sma_vhost -- vhost/common.sh@338 -- # local vm_num=0
00:20:36.441    19:19:07 sma.sma_vhost -- vhost/common.sh@339 -- # shift
00:20:36.441     19:19:07 sma.sma_vhost -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:20:36.441     19:19:07 sma.sma_vhost -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:20:36.441     19:19:07 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:20:36.441     19:19:07 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:20:36.441     19:19:07 sma.sma_vhost -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:20:36.441     19:19:07 sma.sma_vhost -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:20:36.441    19:19:07 sma.sma_vhost -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'lsblk | grep -E "^vd." | wc -l'
00:20:36.441  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:20:36.700   19:19:07 sma.sma_vhost -- sma/vhost_blk.sh@148 -- # [[ 0 -eq 0 ]]
00:20:36.700   19:19:07 sma.sma_vhost -- sma/vhost_blk.sh@150 -- # devids=()
00:20:36.700    19:19:07 sma.sma_vhost -- sma/vhost_blk.sh@153 -- # rpc_cmd bdev_get_bdevs -b null0
00:20:36.700    19:19:07 sma.sma_vhost -- sma/vhost_blk.sh@153 -- # jq -r '.[].uuid'
00:20:36.700    19:19:07 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:36.700    19:19:07 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:20:36.700    19:19:07 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:36.700   19:19:07 sma.sma_vhost -- sma/vhost_blk.sh@153 -- # uuid=946aa955-ac3a-49f3-9f79-b65640ac1d88
00:20:36.700    19:19:07 sma.sma_vhost -- sma/vhost_blk.sh@154 -- # create_device 0 946aa955-ac3a-49f3-9f79-b65640ac1d88
00:20:36.700    19:19:07 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:36.700    19:19:07 sma.sma_vhost -- sma/vhost_blk.sh@154 -- # jq -r .handle
00:20:36.700     19:19:07 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 946aa955-ac3a-49f3-9f79-b65640ac1d88
00:20:36.700     19:19:07 sma.sma_vhost -- sma/common.sh@20 -- # python
00:20:36.960  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:20:36.960  I0000 00:00:1733509147.788398  604302 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:20:36.960  I0000 00:00:1733509147.790317  604302 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:20:36.960  I0000 00:00:1733509147.791937  604347 subchannel.cc:806] subchannel 0x5611a8309560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5611a831ff20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5611a82d66e0, grpc.internal.client_channel_call_destination=0x7feba577e390, grpc.internal.event_engine=0x5611a83055b0, grpc.internal.security_connector=0x5611a8305540, grpc.internal.subchannel_pool=0x5611a8359410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5611a8223a60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:07.791414012+01:00"}), backing off for 1000 ms
00:20:36.960  VHOST_CONFIG: (/var/tmp/sma-0) vhost-user server: socket created, fd: 232
00:20:36.960  VHOST_CONFIG: (/var/tmp/sma-0) binding succeeded
00:20:37.898  VHOST_CONFIG: (/var/tmp/sma-0) new vhost user connection is 59
00:20:37.898  VHOST_CONFIG: (/var/tmp/sma-0) new device, handle is 0
00:20:37.898  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES
00:20:37.898  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_PROTOCOL_FEATURES
00:20:37.898  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_PROTOCOL_FEATURES
00:20:37.898  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Vhost-user protocol features: 0x11ebf
00:20:37.898  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_QUEUE_NUM
00:20:37.898  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_BACKEND_REQ_FD
00:20:37.898  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_OWNER
00:20:37.898  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES
00:20:37.898  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:20:37.898  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:236
00:20:37.898  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR
00:20:37.898  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:20:37.898  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:237
00:20:37.898  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR
00:20:37.898  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_CONFIG
00:20:37.898   19:19:08 sma.sma_vhost -- sma/vhost_blk.sh@154 -- # devids[0]=virtio_blk:sma-0
00:20:37.898    19:19:08 sma.sma_vhost -- sma/vhost_blk.sh@155 -- # rpc_cmd bdev_get_bdevs -b null1
00:20:37.898    19:19:08 sma.sma_vhost -- sma/vhost_blk.sh@155 -- # jq -r '.[].uuid'
00:20:37.899    19:19:08 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:37.899    19:19:08 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:20:37.899    19:19:08 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:37.899   19:19:08 sma.sma_vhost -- sma/vhost_blk.sh@155 -- # uuid=f87f3390-26bb-49ca-82f4-972f7c4fcc34
00:20:37.899    19:19:08 sma.sma_vhost -- sma/vhost_blk.sh@156 -- # create_device 32 f87f3390-26bb-49ca-82f4-972f7c4fcc34
00:20:37.899    19:19:08 sma.sma_vhost -- sma/vhost_blk.sh@156 -- # jq -r .handle
00:20:37.899    19:19:08 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:37.899     19:19:08 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 f87f3390-26bb-49ca-82f4-972f7c4fcc34
00:20:37.899     19:19:08 sma.sma_vhost -- sma/common.sh@20 -- # python
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150005446
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000008):
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 0
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 0
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 0
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 1
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 0
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_INFLIGHT_FD
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd num_queues: 2
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd queue_size: 128
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_size: 4224
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_offset: 0
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) send inflight fd: 58
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_INFLIGHT_FD
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_size: 4224
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_offset: 0
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd num_queues: 2
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd queue_size: 128
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd fd: 238
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd pervq_inflight_size: 2112
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:58
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:236
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150005446
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_MEM_TABLE
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) guest memory region size: 0x40000000
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) 	 guest physical addr: 0x0
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) 	 guest virtual  addr: 0x7fdd2fe00000
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) 	 host  virtual  addr: 0x7febbee00000
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap addr : 0x7febbee00000
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap size : 0x40000000
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap align: 0x200000
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap off  : 0x0
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 last_used_idx:0 last_avail_idx:0.
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:0 file:239
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 last_used_idx:0 last_avail_idx:0.
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:1 file:240
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 0
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 1
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x0000000f):
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 0
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 1
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 1
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 1
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 1
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:20:37.899  VHOST_CONFIG: (/var/tmp/sma-0) virtio is now ready for processing.
00:20:38.158  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:20:38.158  I0000 00:00:1733509148.998366  604506 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:20:38.158  I0000 00:00:1733509149.000416  604506 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:20:38.158  I0000 00:00:1733509149.002101  604509 subchannel.cc:806] subchannel 0x55dbdbbd1560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55dbdbbe7f20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55dbdbb9e6e0, grpc.internal.client_channel_call_destination=0x7ff6d8827390, grpc.internal.event_engine=0x55dbdbbcd5b0, grpc.internal.security_connector=0x55dbdbb11d60, grpc.internal.subchannel_pool=0x55dbdbc21410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55dbdbaeba60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:09.001554489+01:00"}), backing off for 1000 ms
00:20:38.158  VHOST_CONFIG: (/var/tmp/sma-32) vhost-user server: socket created, fd: 243
00:20:38.158  VHOST_CONFIG: (/var/tmp/sma-32) binding succeeded
00:20:39.097  VHOST_CONFIG: (/var/tmp/sma-32) new vhost user connection is 241
00:20:39.097  VHOST_CONFIG: (/var/tmp/sma-32) new device, handle is 1
00:20:39.097  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_FEATURES
00:20:39.097  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_PROTOCOL_FEATURES
00:20:39.097  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_PROTOCOL_FEATURES
00:20:39.097  VHOST_CONFIG: (/var/tmp/sma-32) negotiated Vhost-user protocol features: 0x11ebf
00:20:39.097  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_QUEUE_NUM
00:20:39.097  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_BACKEND_REQ_FD
00:20:39.097  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_OWNER
00:20:39.097  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_FEATURES
00:20:39.097  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_CALL
00:20:39.097  VHOST_CONFIG: (/var/tmp/sma-32) vring call idx:0 file:245
00:20:39.097  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ERR
00:20:39.097  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_CALL
00:20:39.097  VHOST_CONFIG: (/var/tmp/sma-32) vring call idx:1 file:246
00:20:39.097  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ERR
00:20:39.097  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_CONFIG
00:20:39.097   19:19:09 sma.sma_vhost -- sma/vhost_blk.sh@156 -- # devids[1]=virtio_blk:sma-32
00:20:39.097    19:19:09 sma.sma_vhost -- sma/vhost_blk.sh@158 -- # vm_exec 0 'lsblk | grep -E "^vd." | wc -l'
00:20:39.097    19:19:09 sma.sma_vhost -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:20:39.097    19:19:09 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:20:39.097    19:19:09 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:20:39.097    19:19:09 sma.sma_vhost -- vhost/common.sh@338 -- # local vm_num=0
00:20:39.097    19:19:09 sma.sma_vhost -- vhost/common.sh@339 -- # shift
00:20:39.097     19:19:09 sma.sma_vhost -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:20:39.098     19:19:09 sma.sma_vhost -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:20:39.098     19:19:09 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:20:39.098     19:19:09 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:20:39.098     19:19:09 sma.sma_vhost -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:20:39.098     19:19:09 sma.sma_vhost -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:20:39.098    19:19:09 sma.sma_vhost -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'lsblk | grep -E "^vd." | wc -l'
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_FEATURES
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) negotiated Virtio features: 0x150005446
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_STATUS
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_STATUS
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) new device status(0x00000008):
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) 	-RESET: 0
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) 	-ACKNOWLEDGE: 0
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) 	-DRIVER: 0
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) 	-FEATURES_OK: 1
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) 	-DRIVER_OK: 0
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) 	-DEVICE_NEED_RESET: 0
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) 	-FAILED: 0
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_INFLIGHT_FD
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) get_inflight_fd num_queues: 2
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) get_inflight_fd queue_size: 128
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) send inflight mmap_size: 4224
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) send inflight mmap_offset: 0
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) send inflight fd: 242
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_INFLIGHT_FD
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) set_inflight_fd mmap_size: 4224
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) set_inflight_fd mmap_offset: 0
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) set_inflight_fd num_queues: 2
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) set_inflight_fd queue_size: 128
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) set_inflight_fd fd: 247
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) set_inflight_fd pervq_inflight_size: 2112
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_CALL
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) vring call idx:0 file:242
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_CALL
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) vring call idx:1 file:245
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_FEATURES
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) negotiated Virtio features: 0x150005446
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_STATUS
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_MEM_TABLE
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) guest memory region size: 0x40000000
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) 	 guest physical addr: 0x0
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) 	 guest virtual  addr: 0x7fdd2fe00000
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) 	 host  virtual  addr: 0x7feb7ee00000
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) 	 mmap addr : 0x7feb7ee00000
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) 	 mmap size : 0x40000000
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) 	 mmap align: 0x200000
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) 	 mmap off  : 0x0
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_NUM
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_BASE
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) vring base idx:0 last_used_idx:0 last_avail_idx:0.
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ADDR
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_KICK
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) vring kick idx:0 file:248
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_NUM
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_BASE
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) vring base idx:1 last_used_idx:0 last_avail_idx:0.
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ADDR
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_KICK
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) vring kick idx:1 file:249
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ENABLE
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) set queue enable: 1 to qp idx: 0
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ENABLE
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) set queue enable: 1 to qp idx: 1
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_STATUS
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_STATUS
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) new device status(0x0000000f):
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) 	-RESET: 0
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) 	-ACKNOWLEDGE: 1
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) 	-DRIVER: 1
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) 	-FEATURES_OK: 1
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) 	-DRIVER_OK: 1
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) 	-DEVICE_NEED_RESET: 0
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) 	-FAILED: 0
00:20:39.098  VHOST_CONFIG: (/var/tmp/sma-32) virtio is now ready for processing.
00:20:39.098  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:20:39.098   19:19:10 sma.sma_vhost -- sma/vhost_blk.sh@158 -- # [[ 2 -eq 2 ]]
00:20:39.098   19:19:10 sma.sma_vhost -- sma/vhost_blk.sh@161 -- # for id in "${devids[@]}"
00:20:39.098   19:19:10 sma.sma_vhost -- sma/vhost_blk.sh@162 -- # delete_device virtio_blk:sma-0
00:20:39.098   19:19:10 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:39.357  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:20:39.357  I0000 00:00:1733509150.270239  604666 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:20:39.357  I0000 00:00:1733509150.272231  604666 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:20:39.357  I0000 00:00:1733509150.273763  604671 subchannel.cc:806] subchannel 0x5617710d1560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5617710e7f20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x56177109e6e0, grpc.internal.client_channel_call_destination=0x7f3973012390, grpc.internal.event_engine=0x5617710cd5b0, grpc.internal.security_connector=0x5617710cd540, grpc.internal.subchannel_pool=0x561771121410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x561770feba60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:10.273286849+01:00"}), backing off for 999 ms
00:20:39.357  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:20:39.357  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000000):
00:20:39.357  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 1
00:20:39.357  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 0
00:20:39.357  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 0
00:20:39.357  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 0
00:20:39.357  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 0
00:20:39.357  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:20:39.357  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:20:39.357  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:20:39.357  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 0
00:20:39.357  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:20:39.357  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 1
00:20:39.357  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE
00:20:39.357  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 file:49
00:20:39.357  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE
00:20:39.357  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 file:1
00:20:39.615  VHOST_CONFIG: (/var/tmp/sma-0) vhost peer closed
00:20:39.615  {}
00:20:39.615   19:19:10 sma.sma_vhost -- sma/vhost_blk.sh@161 -- # for id in "${devids[@]}"
00:20:39.615   19:19:10 sma.sma_vhost -- sma/vhost_blk.sh@162 -- # delete_device virtio_blk:sma-32
00:20:39.615   19:19:10 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:39.875  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:20:39.875  I0000 00:00:1733509150.678645  604695 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:20:39.875  I0000 00:00:1733509150.680616  604695 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:20:39.875  I0000 00:00:1733509150.682103  604704 subchannel.cc:806] subchannel 0x5596f07a5560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5596f07bbf20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5596f07726e0, grpc.internal.client_channel_call_destination=0x7f5226474390, grpc.internal.event_engine=0x5596f07a15b0, grpc.internal.security_connector=0x5596f07a1540, grpc.internal.subchannel_pool=0x5596f07f5410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5596f06bfa60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:10.681643141+01:00"}), backing off for 1000 ms
00:20:39.875  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_STATUS
00:20:39.875  VHOST_CONFIG: (/var/tmp/sma-32) new device status(0x00000000):
00:20:39.875  VHOST_CONFIG: (/var/tmp/sma-32) 	-RESET: 1
00:20:39.875  VHOST_CONFIG: (/var/tmp/sma-32) 	-ACKNOWLEDGE: 0
00:20:39.875  VHOST_CONFIG: (/var/tmp/sma-32) 	-DRIVER: 0
00:20:39.875  VHOST_CONFIG: (/var/tmp/sma-32) 	-FEATURES_OK: 0
00:20:39.875  VHOST_CONFIG: (/var/tmp/sma-32) 	-DRIVER_OK: 0
00:20:39.875  VHOST_CONFIG: (/var/tmp/sma-32) 	-DEVICE_NEED_RESET: 0
00:20:39.875  VHOST_CONFIG: (/var/tmp/sma-32) 	-FAILED: 0
00:20:39.875  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ENABLE
00:20:39.875  VHOST_CONFIG: (/var/tmp/sma-32) set queue enable: 0 to qp idx: 0
00:20:39.875  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ENABLE
00:20:39.875  VHOST_CONFIG: (/var/tmp/sma-32) set queue enable: 0 to qp idx: 1
00:20:39.875  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_VRING_BASE
00:20:39.875  VHOST_CONFIG: (/var/tmp/sma-32) vring base idx:0 file:49
00:20:39.875  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_VRING_BASE
00:20:39.875  VHOST_CONFIG: (/var/tmp/sma-32) vring base idx:1 file:1
00:20:40.135  VHOST_CONFIG: (/var/tmp/sma-32) vhost peer closed
00:20:40.135  {}
00:20:40.135    19:19:10 sma.sma_vhost -- sma/vhost_blk.sh@166 -- # vm_exec 0 'lsblk | grep -E "^vd." | wc -l'
00:20:40.135    19:19:10 sma.sma_vhost -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:20:40.135    19:19:10 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:20:40.135    19:19:10 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:20:40.135    19:19:10 sma.sma_vhost -- vhost/common.sh@338 -- # local vm_num=0
00:20:40.135    19:19:10 sma.sma_vhost -- vhost/common.sh@339 -- # shift
00:20:40.135     19:19:10 sma.sma_vhost -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:20:40.135     19:19:10 sma.sma_vhost -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:20:40.135     19:19:10 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:20:40.135     19:19:10 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:20:40.135     19:19:10 sma.sma_vhost -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:20:40.135     19:19:10 sma.sma_vhost -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:20:40.135    19:19:10 sma.sma_vhost -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'lsblk | grep -E "^vd." | wc -l'
00:20:40.135  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:20:40.394   19:19:11 sma.sma_vhost -- sma/vhost_blk.sh@166 -- # [[ 0 -eq 0 ]]
00:20:40.394   19:19:11 sma.sma_vhost -- sma/vhost_blk.sh@168 -- # key0=1234567890abcdef1234567890abcdef
00:20:40.394   19:19:11 sma.sma_vhost -- sma/vhost_blk.sh@169 -- # rpc_cmd bdev_malloc_create -b malloc0 32 4096
00:20:40.394   19:19:11 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:40.394   19:19:11 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:20:40.394  malloc0
00:20:40.394   19:19:11 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:40.394    19:19:11 sma.sma_vhost -- sma/vhost_blk.sh@170 -- # rpc_cmd bdev_get_bdevs -b malloc0
00:20:40.394    19:19:11 sma.sma_vhost -- sma/vhost_blk.sh@170 -- # jq -r '.[].uuid'
00:20:40.394    19:19:11 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:40.394    19:19:11 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:20:40.394    19:19:11 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:40.394   19:19:11 sma.sma_vhost -- sma/vhost_blk.sh@170 -- # uuid=6dbd18c5-d558-4bef-885d-1d596bfe1052
00:20:40.394    19:19:11 sma.sma_vhost -- sma/vhost_blk.sh@192 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:40.394    19:19:11 sma.sma_vhost -- sma/vhost_blk.sh@210 -- # jq -r .handle
00:20:40.394     19:19:11 sma.sma_vhost -- sma/vhost_blk.sh@192 -- # uuid2base64 6dbd18c5-d558-4bef-885d-1d596bfe1052
00:20:40.394     19:19:11 sma.sma_vhost -- sma/common.sh@20 -- # python
00:20:40.394     19:19:11 sma.sma_vhost -- sma/vhost_blk.sh@192 -- # get_cipher AES_CBC
00:20:40.394     19:19:11 sma.sma_vhost -- sma/common.sh@27 -- # case "$1" in
00:20:40.394     19:19:11 sma.sma_vhost -- sma/common.sh@28 -- # echo 0
00:20:40.394     19:19:11 sma.sma_vhost -- sma/vhost_blk.sh@192 -- # format_key 1234567890abcdef1234567890abcdef
00:20:40.394     19:19:11 sma.sma_vhost -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/63
00:20:40.394      19:19:11 sma.sma_vhost -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:20:40.653  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:20:40.653  I0000 00:00:1733509151.519639  604851 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:20:40.653  I0000 00:00:1733509151.521475  604851 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:20:40.653  I0000 00:00:1733509151.523189  604867 subchannel.cc:806] subchannel 0x5579b613c560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5579b6152f20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5579b61096e0, grpc.internal.client_channel_call_destination=0x7f3d43b21390, grpc.internal.event_engine=0x5579b61385b0, grpc.internal.security_connector=0x5579b6138540, grpc.internal.subchannel_pool=0x5579b618c410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5579b6056a60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:11.522665252+01:00"}), backing off for 1000 ms
00:20:40.653  VHOST_CONFIG: (/var/tmp/sma-0) vhost-user server: socket created, fd: 252
00:20:40.653  VHOST_CONFIG: (/var/tmp/sma-0) binding succeeded
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) new vhost user connection is 60
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) new device, handle is 0
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_PROTOCOL_FEATURES
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_PROTOCOL_FEATURES
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Vhost-user protocol features: 0x11ebf
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_QUEUE_NUM
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_BACKEND_REQ_FD
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_OWNER
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:254
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:255
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_CONFIG
00:20:41.223   19:19:11 sma.sma_vhost -- sma/vhost_blk.sh@192 -- # devid0=virtio_blk:sma-0
00:20:41.223    19:19:11 sma.sma_vhost -- sma/vhost_blk.sh@194 -- # rpc_cmd vhost_get_controllers
00:20:41.223    19:19:11 sma.sma_vhost -- sma/vhost_blk.sh@194 -- # jq -r '. | length'
00:20:41.223    19:19:11 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:41.223    19:19:11 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:20:41.223    19:19:11 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:41.223   19:19:12 sma.sma_vhost -- sma/vhost_blk.sh@194 -- # [[ 1 -eq 1 ]]
00:20:41.223    19:19:12 sma.sma_vhost -- sma/vhost_blk.sh@195 -- # rpc_cmd vhost_get_controllers
00:20:41.223    19:19:12 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:41.223    19:19:12 sma.sma_vhost -- sma/vhost_blk.sh@195 -- # jq -r '.[].backend_specific.block.bdev'
00:20:41.223    19:19:12 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150007646
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000008):
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 0
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 0
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 0
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 1
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 0
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_INFLIGHT_FD
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd num_queues: 2
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd queue_size: 128
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_size: 4224
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_offset: 0
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) send inflight fd: 59
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_INFLIGHT_FD
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_size: 4224
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_offset: 0
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd num_queues: 2
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd queue_size: 128
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd fd: 256
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd pervq_inflight_size: 2112
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:59
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:254
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150007646
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_MEM_TABLE
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) guest memory region size: 0x40000000
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) 	 guest physical addr: 0x0
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) 	 guest virtual  addr: 0x7fdd2fe00000
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) 	 host  virtual  addr: 0x7febbee00000
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap addr : 0x7febbee00000
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap size : 0x40000000
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap align: 0x200000
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap off  : 0x0
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 last_used_idx:0 last_avail_idx:0.
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:0 file:258
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 last_used_idx:0 last_avail_idx:0.
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:1 file:259
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 0
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 1
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x0000000f):
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 0
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 1
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 1
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 1
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 1
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:20:41.223  VHOST_CONFIG: (/var/tmp/sma-0) virtio is now ready for processing.
00:20:41.223    19:19:12 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:41.223   19:19:12 sma.sma_vhost -- sma/vhost_blk.sh@195 -- # bdev=fea309dd-5707-4530-bdac-d43f4c702056
00:20:41.223    19:19:12 sma.sma_vhost -- sma/vhost_blk.sh@197 -- # rpc_cmd bdev_get_bdevs
00:20:41.223    19:19:12 sma.sma_vhost -- sma/vhost_blk.sh@197 -- # jq -r '.[] | select(.product_name == "crypto")'
00:20:41.223    19:19:12 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:41.223    19:19:12 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:20:41.223    19:19:12 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:41.223   19:19:12 sma.sma_vhost -- sma/vhost_blk.sh@197 -- # crypto_bdev='{
00:20:41.223    "name": "fea309dd-5707-4530-bdac-d43f4c702056",
00:20:41.223    "aliases": [
00:20:41.224      "5188e65c-1e37-5427-87cc-fb5f30a30dc5"
00:20:41.224    ],
00:20:41.224    "product_name": "crypto",
00:20:41.224    "block_size": 4096,
00:20:41.224    "num_blocks": 8192,
00:20:41.224    "uuid": "5188e65c-1e37-5427-87cc-fb5f30a30dc5",
00:20:41.224    "assigned_rate_limits": {
00:20:41.224      "rw_ios_per_sec": 0,
00:20:41.224      "rw_mbytes_per_sec": 0,
00:20:41.224      "r_mbytes_per_sec": 0,
00:20:41.224      "w_mbytes_per_sec": 0
00:20:41.224    },
00:20:41.224    "claimed": false,
00:20:41.224    "zoned": false,
00:20:41.224    "supported_io_types": {
00:20:41.224      "read": true,
00:20:41.224      "write": true,
00:20:41.224      "unmap": true,
00:20:41.224      "flush": true,
00:20:41.224      "reset": true,
00:20:41.224      "nvme_admin": false,
00:20:41.224      "nvme_io": false,
00:20:41.224      "nvme_io_md": false,
00:20:41.224      "write_zeroes": true,
00:20:41.224      "zcopy": false,
00:20:41.224      "get_zone_info": false,
00:20:41.224      "zone_management": false,
00:20:41.224      "zone_append": false,
00:20:41.224      "compare": false,
00:20:41.224      "compare_and_write": false,
00:20:41.224      "abort": false,
00:20:41.224      "seek_hole": false,
00:20:41.224      "seek_data": false,
00:20:41.224      "copy": false,
00:20:41.224      "nvme_iov_md": false
00:20:41.224    },
00:20:41.224    "memory_domains": [
00:20:41.224      {
00:20:41.224        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:41.224        "dma_device_type": 2
00:20:41.224      }
00:20:41.224    ],
00:20:41.224    "driver_specific": {
00:20:41.224      "crypto": {
00:20:41.224        "base_bdev_name": "malloc0",
00:20:41.224        "name": "fea309dd-5707-4530-bdac-d43f4c702056",
00:20:41.224        "key_name": "fea309dd-5707-4530-bdac-d43f4c702056_AES_CBC"
00:20:41.224      }
00:20:41.224    }
00:20:41.224  }'
00:20:41.224    19:19:12 sma.sma_vhost -- sma/vhost_blk.sh@198 -- # jq -r .driver_specific.crypto.name
00:20:41.224   19:19:12 sma.sma_vhost -- sma/vhost_blk.sh@198 -- # [[ fea309dd-5707-4530-bdac-d43f4c702056 == \f\e\a\3\0\9\d\d\-\5\7\0\7\-\4\5\3\0\-\b\d\a\c\-\d\4\3\f\4\c\7\0\2\0\5\6 ]]
00:20:41.224    19:19:12 sma.sma_vhost -- sma/vhost_blk.sh@199 -- # jq -r .driver_specific.crypto.key_name
00:20:41.482   19:19:12 sma.sma_vhost -- sma/vhost_blk.sh@199 -- # key_name=fea309dd-5707-4530-bdac-d43f4c702056_AES_CBC
00:20:41.482    19:19:12 sma.sma_vhost -- sma/vhost_blk.sh@200 -- # rpc_cmd accel_crypto_keys_get -k fea309dd-5707-4530-bdac-d43f4c702056_AES_CBC
00:20:41.482    19:19:12 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:41.482    19:19:12 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:20:41.482    19:19:12 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:41.482   19:19:12 sma.sma_vhost -- sma/vhost_blk.sh@200 -- # key_obj='[
00:20:41.482  {
00:20:41.482  "name": "fea309dd-5707-4530-bdac-d43f4c702056_AES_CBC",
00:20:41.482  "cipher": "AES_CBC",
00:20:41.482  "key": "1234567890abcdef1234567890abcdef"
00:20:41.482  }
00:20:41.482  ]'
00:20:41.482    19:19:12 sma.sma_vhost -- sma/vhost_blk.sh@201 -- # jq -r '.[0].key'
00:20:41.482   19:19:12 sma.sma_vhost -- sma/vhost_blk.sh@201 -- # [[ 1234567890abcdef1234567890abcdef == \1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f\1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f ]]
00:20:41.482    19:19:12 sma.sma_vhost -- sma/vhost_blk.sh@202 -- # jq -r '.[0].cipher'
00:20:41.482   19:19:12 sma.sma_vhost -- sma/vhost_blk.sh@202 -- # [[ AES_CBC == \A\E\S\_\C\B\C ]]
00:20:41.482   19:19:12 sma.sma_vhost -- sma/vhost_blk.sh@205 -- # delete_device virtio_blk:sma-0
00:20:41.482   19:19:12 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:41.740  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:20:41.741  I0000 00:00:1733509152.494478  605033 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:20:41.741  I0000 00:00:1733509152.496189  605033 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:20:41.741  I0000 00:00:1733509152.497630  605040 subchannel.cc:806] subchannel 0x5649ac077560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5649ac08df20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5649ac0446e0, grpc.internal.client_channel_call_destination=0x7f2b38380390, grpc.internal.event_engine=0x5649ac0735b0, grpc.internal.security_connector=0x5649ac073540, grpc.internal.subchannel_pool=0x5649ac0c7410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5649abf91a60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:12.497039787+01:00"}), backing off for 999 ms
00:20:41.741  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:20:41.741  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000000):
00:20:41.741  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 1
00:20:41.741  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 0
00:20:41.741  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 0
00:20:41.741  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 0
00:20:41.741  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 0
00:20:41.741  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:20:41.741  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:20:41.741  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:20:41.741  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 0
00:20:41.741  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:20:41.741  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 1
00:20:41.741  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE
00:20:41.741  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 file:36
00:20:41.741  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE
00:20:41.741  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 file:0
00:20:41.741  VHOST_CONFIG: (/var/tmp/sma-0) vhost peer closed
00:20:41.741  {}
00:20:41.741    19:19:12 sma.sma_vhost -- sma/vhost_blk.sh@206 -- # rpc_cmd bdev_get_bdevs
00:20:41.741    19:19:12 sma.sma_vhost -- sma/vhost_blk.sh@206 -- # jq -r '.[] | select(.product_name == "crypto")'
00:20:41.741    19:19:12 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:41.741    19:19:12 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:20:41.741    19:19:12 sma.sma_vhost -- sma/vhost_blk.sh@206 -- # jq -r length
00:20:41.741    19:19:12 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:42.000   19:19:12 sma.sma_vhost -- sma/vhost_blk.sh@206 -- # [[ '' -eq 0 ]]
00:20:42.001   19:19:12 sma.sma_vhost -- sma/vhost_blk.sh@209 -- # device_vhost=2
00:20:42.001    19:19:12 sma.sma_vhost -- sma/vhost_blk.sh@210 -- # rpc_cmd bdev_get_bdevs -b null0
00:20:42.001    19:19:12 sma.sma_vhost -- sma/vhost_blk.sh@210 -- # jq -r '.[].uuid'
00:20:42.001    19:19:12 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:42.001    19:19:12 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:20:42.001    19:19:12 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:42.001   19:19:12 sma.sma_vhost -- sma/vhost_blk.sh@210 -- # uuid=946aa955-ac3a-49f3-9f79-b65640ac1d88
00:20:42.001    19:19:12 sma.sma_vhost -- sma/vhost_blk.sh@211 -- # create_device 0 946aa955-ac3a-49f3-9f79-b65640ac1d88
00:20:42.001    19:19:12 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:42.001    19:19:12 sma.sma_vhost -- sma/vhost_blk.sh@211 -- # jq -r .handle
00:20:42.001     19:19:12 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 946aa955-ac3a-49f3-9f79-b65640ac1d88
00:20:42.001     19:19:12 sma.sma_vhost -- sma/common.sh@20 -- # python
00:20:42.258  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:20:42.259  I0000 00:00:1733509153.016506  605074 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:20:42.259  I0000 00:00:1733509153.018348  605074 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:20:42.259  I0000 00:00:1733509153.019988  605078 subchannel.cc:806] subchannel 0x55d8b8d12560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55d8b8d28f20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55d8b8cdf6e0, grpc.internal.client_channel_call_destination=0x7f3754022390, grpc.internal.event_engine=0x55d8b8d0e5b0, grpc.internal.security_connector=0x55d8b8d0e540, grpc.internal.subchannel_pool=0x55d8b8d62410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55d8b8c2ca60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:13.019430785+01:00"}), backing off for 1000 ms
00:20:42.259  VHOST_CONFIG: (/var/tmp/sma-0) vhost-user server: socket created, fd: 252
00:20:42.259  VHOST_CONFIG: (/var/tmp/sma-0) binding succeeded
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) new vhost user connection is 58
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) new device, handle is 0
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_PROTOCOL_FEATURES
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_PROTOCOL_FEATURES
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Vhost-user protocol features: 0x11ebf
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_QUEUE_NUM
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_BACKEND_REQ_FD
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_OWNER
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:254
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:255
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_CONFIG
00:20:42.825   19:19:13 sma.sma_vhost -- sma/vhost_blk.sh@211 -- # device=virtio_blk:sma-0
00:20:42.825   19:19:13 sma.sma_vhost -- sma/vhost_blk.sh@214 -- # diff /dev/fd/62 /dev/fd/61
00:20:42.825    19:19:13 sma.sma_vhost -- sma/vhost_blk.sh@214 -- # jq --sort-keys
00:20:42.825    19:19:13 sma.sma_vhost -- sma/vhost_blk.sh@214 -- # get_qos_caps 2
00:20:42.825    19:19:13 sma.sma_vhost -- sma/vhost_blk.sh@214 -- # jq --sort-keys
00:20:42.825    19:19:13 sma.sma_vhost -- sma/common.sh@45 -- # local rootdir
00:20:42.825     19:19:13 sma.sma_vhost -- sma/common.sh@47 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:20:42.825    19:19:13 sma.sma_vhost -- sma/common.sh@47 -- # rootdir=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../..
00:20:42.825    19:19:13 sma.sma_vhost -- sma/common.sh@49 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../../scripts/sma-client.py
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150005446
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000008):
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 0
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 0
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 0
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 1
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 0
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_INFLIGHT_FD
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd num_queues: 2
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd queue_size: 128
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_size: 4224
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_offset: 0
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) send inflight fd: 60
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_INFLIGHT_FD
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_size: 4224
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_offset: 0
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd num_queues: 2
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd queue_size: 128
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd fd: 256
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd pervq_inflight_size: 2112
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:60
00:20:42.825  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:20:42.826  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:254
00:20:42.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES
00:20:42.826  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150005446
00:20:42.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:20:42.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_MEM_TABLE
00:20:42.826  VHOST_CONFIG: (/var/tmp/sma-0) guest memory region size: 0x40000000
00:20:42.826  VHOST_CONFIG: (/var/tmp/sma-0) 	 guest physical addr: 0x0
00:20:42.826  VHOST_CONFIG: (/var/tmp/sma-0) 	 guest virtual  addr: 0x7fdd2fe00000
00:20:42.826  VHOST_CONFIG: (/var/tmp/sma-0) 	 host  virtual  addr: 0x7feb7ec00000
00:20:42.826  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap addr : 0x7feb7ec00000
00:20:42.826  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap size : 0x40000000
00:20:42.826  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap align: 0x200000
00:20:42.826  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap off  : 0x0
00:20:42.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM
00:20:42.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE
00:20:42.826  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 last_used_idx:0 last_avail_idx:0.
00:20:42.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR
00:20:42.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK
00:20:42.826  VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:0 file:257
00:20:42.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM
00:20:42.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE
00:20:42.826  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 last_used_idx:0 last_avail_idx:0.
00:20:42.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR
00:20:42.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK
00:20:42.826  VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:1 file:258
00:20:42.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:20:42.826  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 0
00:20:42.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:20:42.826  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 1
00:20:42.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:20:42.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:20:42.826  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x0000000f):
00:20:42.826  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 0
00:20:42.826  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 1
00:20:42.826  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 1
00:20:42.826  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 1
00:20:42.826  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 1
00:20:42.826  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:20:42.826  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:20:42.826  VHOST_CONFIG: (/var/tmp/sma-0) virtio is now ready for processing.
00:20:43.083  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:20:43.083  I0000 00:00:1733509153.877978  605234 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:20:43.083  I0000 00:00:1733509153.879807  605234 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:20:43.083  I0000 00:00:1733509153.881416  605240 subchannel.cc:806] subchannel 0x55adf7bf84e0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55adf7b76640, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55adf7a70020, grpc.internal.client_channel_call_destination=0x7f80934fd390, grpc.internal.event_engine=0x55adf7a26c90, grpc.internal.security_connector=0x55adf7b29480, grpc.internal.subchannel_pool=0x55adf7b292e0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55adf7a414b0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:13.880892831+01:00"}), backing off for 1000 ms
00:20:43.083   19:19:13 sma.sma_vhost -- sma/vhost_blk.sh@233 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:43.083    19:19:13 sma.sma_vhost -- sma/vhost_blk.sh@233 -- # uuid2base64 946aa955-ac3a-49f3-9f79-b65640ac1d88
00:20:43.083    19:19:13 sma.sma_vhost -- sma/common.sh@20 -- # python
00:20:43.341  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:20:43.341  I0000 00:00:1733509154.191484  605260 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:20:43.341  I0000 00:00:1733509154.193272  605260 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:20:43.341  I0000 00:00:1733509154.194753  605264 subchannel.cc:806] subchannel 0x55feebb83560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55feebb99f20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55feebb506e0, grpc.internal.client_channel_call_destination=0x7fd6f94c8390, grpc.internal.event_engine=0x55feebb7f5b0, grpc.internal.security_connector=0x55feebb03fb0, grpc.internal.subchannel_pool=0x55feebbd3410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55feeba9da60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:14.194289437+01:00"}), backing off for 999 ms
00:20:43.341  {}
00:20:43.341   19:19:14 sma.sma_vhost -- sma/vhost_blk.sh@252 -- # diff /dev/fd/62 /dev/fd/61
00:20:43.341    19:19:14 sma.sma_vhost -- sma/vhost_blk.sh@252 -- # jq --sort-keys
00:20:43.341    19:19:14 sma.sma_vhost -- sma/vhost_blk.sh@252 -- # rpc_cmd bdev_get_bdevs -b 946aa955-ac3a-49f3-9f79-b65640ac1d88
00:20:43.341    19:19:14 sma.sma_vhost -- sma/vhost_blk.sh@252 -- # jq --sort-keys '.[].assigned_rate_limits'
00:20:43.341    19:19:14 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:43.341    19:19:14 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:20:43.341    19:19:14 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:43.599   19:19:14 sma.sma_vhost -- sma/vhost_blk.sh@264 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:43.599  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:20:43.599  I0000 00:00:1733509154.538877  605306 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:20:43.599  I0000 00:00:1733509154.540784  605306 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:20:43.600  I0000 00:00:1733509154.542376  605415 subchannel.cc:806] subchannel 0x557281237560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55728124df20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5572812046e0, grpc.internal.client_channel_call_destination=0x7fc6569fe390, grpc.internal.event_engine=0x5572812335b0, grpc.internal.security_connector=0x557281177d60, grpc.internal.subchannel_pool=0x557281287410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x557281151a60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:14.541860873+01:00"}), backing off for 1000 ms
00:20:43.858  {}
00:20:43.858   19:19:14 sma.sma_vhost -- sma/vhost_blk.sh@283 -- # diff /dev/fd/62 /dev/fd/61
00:20:43.858    19:19:14 sma.sma_vhost -- sma/vhost_blk.sh@283 -- # jq --sort-keys
00:20:43.858    19:19:14 sma.sma_vhost -- sma/vhost_blk.sh@283 -- # rpc_cmd bdev_get_bdevs -b 946aa955-ac3a-49f3-9f79-b65640ac1d88
00:20:43.858    19:19:14 sma.sma_vhost -- sma/vhost_blk.sh@283 -- # jq --sort-keys '.[].assigned_rate_limits'
00:20:43.858    19:19:14 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:43.858    19:19:14 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:20:43.858    19:19:14 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:43.858   19:19:14 sma.sma_vhost -- sma/vhost_blk.sh@295 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:43.858     19:19:14 sma.sma_vhost -- sma/vhost_blk.sh@295 -- # uuidgen
00:20:43.858    19:19:14 sma.sma_vhost -- sma/vhost_blk.sh@295 -- # uuid2base64 6074f4af-8296-41ea-b1d8-80ccb52a2b0e
00:20:43.858    19:19:14 sma.sma_vhost -- sma/common.sh@20 -- # python
00:20:43.858   19:19:14 sma.sma_vhost -- common/autotest_common.sh@652 -- # local es=0
00:20:43.858   19:19:14 sma.sma_vhost -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:43.858   19:19:14 sma.sma_vhost -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:43.858   19:19:14 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:43.858    19:19:14 sma.sma_vhost -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:43.858   19:19:14 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:43.858    19:19:14 sma.sma_vhost -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:43.858   19:19:14 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:43.858   19:19:14 sma.sma_vhost -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:43.858   19:19:14 sma.sma_vhost -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:20:43.858   19:19:14 sma.sma_vhost -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:44.117  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:20:44.117  I0000 00:00:1733509154.912772  605447 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:20:44.117  I0000 00:00:1733509154.914708  605447 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:20:44.117  I0000 00:00:1733509154.916389  605452 subchannel.cc:806] subchannel 0x55d0d2590560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55d0d25a6f20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55d0d255d6e0, grpc.internal.client_channel_call_destination=0x7fc0030fb390, grpc.internal.event_engine=0x55d0d258c5b0, grpc.internal.security_connector=0x55d0d2510fb0, grpc.internal.subchannel_pool=0x55d0d25e0410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55d0d24aaa60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:14.915890357+01:00"}), backing off for 1000 ms
00:20:44.117  [2024-12-06 19:19:14.952656] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 6074f4af-8296-41ea-b1d8-80ccb52a2b0e
00:20:44.117  Traceback (most recent call last):
00:20:44.117    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:20:44.117      main(sys.argv[1:])
00:20:44.117    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:20:44.117      result = client.call(request['method'], request.get('params', {}))
00:20:44.117               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:20:44.117    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:20:44.117      response = func(request=json_format.ParseDict(params, input()))
00:20:44.117                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:20:44.117    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:20:44.117      return _end_unary_response_blocking(state, call, False, None)
00:20:44.117             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:20:44.117    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:20:44.117      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:20:44.117      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:20:44.117  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:20:44.117  	status = StatusCode.INVALID_ARGUMENT
00:20:44.117  	details = "Specified volume is not attached to the device"
00:20:44.117  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Specified volume is not attached to the device", grpc_status:3, created_time:"2024-12-06T19:19:14.956993055+01:00"}"
00:20:44.117  >
00:20:44.117   19:19:14 sma.sma_vhost -- common/autotest_common.sh@655 -- # es=1
00:20:44.117   19:19:14 sma.sma_vhost -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:44.117   19:19:14 sma.sma_vhost -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:44.117   19:19:14 sma.sma_vhost -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:20:44.118   19:19:14 sma.sma_vhost -- sma/vhost_blk.sh@314 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:44.118    19:19:14 sma.sma_vhost -- sma/vhost_blk.sh@314 -- # base64
00:20:44.118   19:19:14 sma.sma_vhost -- common/autotest_common.sh@652 -- # local es=0
00:20:44.118   19:19:14 sma.sma_vhost -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:44.118   19:19:14 sma.sma_vhost -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:44.118   19:19:14 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:44.118    19:19:14 sma.sma_vhost -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:44.118   19:19:14 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:44.118    19:19:14 sma.sma_vhost -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:44.118   19:19:14 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:44.118   19:19:14 sma.sma_vhost -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:44.118   19:19:14 sma.sma_vhost -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:20:44.118   19:19:14 sma.sma_vhost -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:44.376  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:20:44.376  I0000 00:00:1733509155.224570  605478 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:20:44.376  I0000 00:00:1733509155.226225  605478 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:20:44.376  I0000 00:00:1733509155.227732  605484 subchannel.cc:806] subchannel 0x55a190436560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55a19044cf20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55a1904036e0, grpc.internal.client_channel_call_destination=0x7f7a98470390, grpc.internal.event_engine=0x55a1904325b0, grpc.internal.security_connector=0x55a190376d60, grpc.internal.subchannel_pool=0x55a190486410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55a190350a60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:15.227259084+01:00"}), backing off for 999 ms
00:20:44.376  Traceback (most recent call last):
00:20:44.376    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:20:44.376      main(sys.argv[1:])
00:20:44.376    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:20:44.376      result = client.call(request['method'], request.get('params', {}))
00:20:44.376               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:20:44.376    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:20:44.376      response = func(request=json_format.ParseDict(params, input()))
00:20:44.376                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:20:44.376    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:20:44.376      return _end_unary_response_blocking(state, call, False, None)
00:20:44.376             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:20:44.376    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:20:44.376      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:20:44.376      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:20:44.376  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:20:44.376  	status = StatusCode.INVALID_ARGUMENT
00:20:44.376  	details = "Invalid volume uuid"
00:20:44.376  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-12-06T19:19:15.233900336+01:00", grpc_status:3, grpc_message:"Invalid volume uuid"}"
00:20:44.376  >
00:20:44.376   19:19:15 sma.sma_vhost -- common/autotest_common.sh@655 -- # es=1
00:20:44.376   19:19:15 sma.sma_vhost -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:44.376   19:19:15 sma.sma_vhost -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:44.376   19:19:15 sma.sma_vhost -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:20:44.376   19:19:15 sma.sma_vhost -- sma/vhost_blk.sh@333 -- # diff /dev/fd/62 /dev/fd/61
00:20:44.376    19:19:15 sma.sma_vhost -- sma/vhost_blk.sh@333 -- # jq --sort-keys
00:20:44.376    19:19:15 sma.sma_vhost -- sma/vhost_blk.sh@333 -- # rpc_cmd bdev_get_bdevs -b 946aa955-ac3a-49f3-9f79-b65640ac1d88
00:20:44.376    19:19:15 sma.sma_vhost -- sma/vhost_blk.sh@333 -- # jq --sort-keys '.[].assigned_rate_limits'
00:20:44.376    19:19:15 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:44.376    19:19:15 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:20:44.376    19:19:15 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:44.376   19:19:15 sma.sma_vhost -- sma/vhost_blk.sh@344 -- # delete_device virtio_blk:sma-0
00:20:44.376   19:19:15 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:44.636  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:20:44.636  I0000 00:00:1733509155.543185  605511 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:20:44.636  I0000 00:00:1733509155.544953  605511 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:20:44.636  I0000 00:00:1733509155.546466  605636 subchannel.cc:806] subchannel 0x5650b8bec560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5650b8c02f20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5650b8bb96e0, grpc.internal.client_channel_call_destination=0x7fc848f8d390, grpc.internal.event_engine=0x5650b8be85b0, grpc.internal.security_connector=0x5650b8be8540, grpc.internal.subchannel_pool=0x5650b8c3c410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5650b8b06a60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:15.545948518+01:00"}), backing off for 1000 ms
00:20:44.896  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:20:44.896  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000000):
00:20:44.896  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 1
00:20:44.896  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 0
00:20:44.896  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 0
00:20:44.896  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 0
00:20:44.896  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 0
00:20:44.896  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:20:44.896  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:20:44.896  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:20:44.897  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 0
00:20:44.897  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:20:44.897  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 1
00:20:44.897  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE
00:20:44.897  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 file:28
00:20:44.897  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE
00:20:44.897  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 file:22
00:20:44.897  VHOST_CONFIG: (/var/tmp/sma-0) vhost peer closed
00:20:44.897  {}
00:20:44.897   19:19:15 sma.sma_vhost -- sma/vhost_blk.sh@346 -- # cleanup
00:20:44.897   19:19:15 sma.sma_vhost -- sma/vhost_blk.sh@14 -- # killprocess 602974
00:20:44.897   19:19:15 sma.sma_vhost -- common/autotest_common.sh@954 -- # '[' -z 602974 ']'
00:20:44.897   19:19:15 sma.sma_vhost -- common/autotest_common.sh@958 -- # kill -0 602974
00:20:44.897    19:19:15 sma.sma_vhost -- common/autotest_common.sh@959 -- # uname
00:20:44.897   19:19:15 sma.sma_vhost -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:44.897    19:19:15 sma.sma_vhost -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 602974
00:20:44.897   19:19:15 sma.sma_vhost -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:44.897   19:19:15 sma.sma_vhost -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:44.897   19:19:15 sma.sma_vhost -- common/autotest_common.sh@972 -- # echo 'killing process with pid 602974'
00:20:44.897  killing process with pid 602974
00:20:44.897   19:19:15 sma.sma_vhost -- common/autotest_common.sh@973 -- # kill 602974
00:20:44.897   19:19:15 sma.sma_vhost -- common/autotest_common.sh@978 -- # wait 602974
00:20:45.833   19:19:16 sma.sma_vhost -- sma/vhost_blk.sh@15 -- # killprocess 603117
00:20:45.833   19:19:16 sma.sma_vhost -- common/autotest_common.sh@954 -- # '[' -z 603117 ']'
00:20:45.833   19:19:16 sma.sma_vhost -- common/autotest_common.sh@958 -- # kill -0 603117
00:20:45.833    19:19:16 sma.sma_vhost -- common/autotest_common.sh@959 -- # uname
00:20:45.833   19:19:16 sma.sma_vhost -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:45.833    19:19:16 sma.sma_vhost -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 603117
00:20:45.833   19:19:16 sma.sma_vhost -- common/autotest_common.sh@960 -- # process_name=python3
00:20:45.833   19:19:16 sma.sma_vhost -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:20:45.833   19:19:16 sma.sma_vhost -- common/autotest_common.sh@972 -- # echo 'killing process with pid 603117'
00:20:45.833  killing process with pid 603117
00:20:45.833   19:19:16 sma.sma_vhost -- common/autotest_common.sh@973 -- # kill 603117
00:20:45.833   19:19:16 sma.sma_vhost -- common/autotest_common.sh@978 -- # wait 603117
00:20:45.833   19:19:16 sma.sma_vhost -- sma/vhost_blk.sh@16 -- # vm_kill_all
00:20:45.833   19:19:16 sma.sma_vhost -- vhost/common.sh@476 -- # local vm
00:20:45.833    19:19:16 sma.sma_vhost -- vhost/common.sh@477 -- # vm_list_all
00:20:45.833    19:19:16 sma.sma_vhost -- vhost/common.sh@466 -- # vms=()
00:20:45.833    19:19:16 sma.sma_vhost -- vhost/common.sh@466 -- # local vms
00:20:45.833    19:19:16 sma.sma_vhost -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:20:45.833    19:19:16 sma.sma_vhost -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:20:45.833    19:19:16 sma.sma_vhost -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/0
00:20:45.833   19:19:16 sma.sma_vhost -- vhost/common.sh@477 -- # for vm in $(vm_list_all)
00:20:45.833   19:19:16 sma.sma_vhost -- vhost/common.sh@478 -- # vm_kill 0
00:20:45.833   19:19:16 sma.sma_vhost -- vhost/common.sh@442 -- # vm_num_is_valid 0
00:20:45.833   19:19:16 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:20:45.833   19:19:16 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:20:45.833   19:19:16 sma.sma_vhost -- vhost/common.sh@443 -- # local vm_dir=/root/vhost_test/vms/0
00:20:45.833   19:19:16 sma.sma_vhost -- vhost/common.sh@445 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:20:45.833   19:19:16 sma.sma_vhost -- vhost/common.sh@449 -- # local vm_pid
00:20:45.833    19:19:16 sma.sma_vhost -- vhost/common.sh@450 -- # cat /root/vhost_test/vms/0/qemu.pid
00:20:45.833   19:19:16 sma.sma_vhost -- vhost/common.sh@450 -- # vm_pid=600232
00:20:45.833   19:19:16 sma.sma_vhost -- vhost/common.sh@452 -- # notice 'Killing virtual machine /root/vhost_test/vms/0 (pid=600232)'
00:20:45.833   19:19:16 sma.sma_vhost -- vhost/common.sh@94 -- # message INFO 'Killing virtual machine /root/vhost_test/vms/0 (pid=600232)'
00:20:45.833   19:19:16 sma.sma_vhost -- vhost/common.sh@60 -- # local verbose_out
00:20:45.833   19:19:16 sma.sma_vhost -- vhost/common.sh@61 -- # false
00:20:45.833   19:19:16 sma.sma_vhost -- vhost/common.sh@62 -- # verbose_out=
00:20:45.833   19:19:16 sma.sma_vhost -- vhost/common.sh@69 -- # local msg_type=INFO
00:20:45.833   19:19:16 sma.sma_vhost -- vhost/common.sh@70 -- # shift
00:20:45.833   19:19:16 sma.sma_vhost -- vhost/common.sh@71 -- # echo -e 'INFO: Killing virtual machine /root/vhost_test/vms/0 (pid=600232)'
00:20:45.833  INFO: Killing virtual machine /root/vhost_test/vms/0 (pid=600232)
00:20:45.833   19:19:16 sma.sma_vhost -- vhost/common.sh@454 -- # /bin/kill 600232
00:20:45.833   19:19:16 sma.sma_vhost -- vhost/common.sh@455 -- # notice 'process 600232 killed'
00:20:45.833   19:19:16 sma.sma_vhost -- vhost/common.sh@94 -- # message INFO 'process 600232 killed'
00:20:45.833   19:19:16 sma.sma_vhost -- vhost/common.sh@60 -- # local verbose_out
00:20:45.833   19:19:16 sma.sma_vhost -- vhost/common.sh@61 -- # false
00:20:45.833   19:19:16 sma.sma_vhost -- vhost/common.sh@62 -- # verbose_out=
00:20:45.833   19:19:16 sma.sma_vhost -- vhost/common.sh@69 -- # local msg_type=INFO
00:20:45.833   19:19:16 sma.sma_vhost -- vhost/common.sh@70 -- # shift
00:20:45.833   19:19:16 sma.sma_vhost -- vhost/common.sh@71 -- # echo -e 'INFO: process 600232 killed'
00:20:45.833  INFO: process 600232 killed
00:20:45.833   19:19:16 sma.sma_vhost -- vhost/common.sh@456 -- # rm -rf /root/vhost_test/vms/0
00:20:45.833   19:19:16 sma.sma_vhost -- vhost/common.sh@481 -- # rm -rf /root/vhost_test/vms
00:20:45.833   19:19:16 sma.sma_vhost -- sma/vhost_blk.sh@347 -- # trap - SIGINT SIGTERM EXIT
00:20:45.833  
00:20:45.833  real	0m41.999s
00:20:45.833  user	0m42.850s
00:20:45.833  sys	0m2.815s
00:20:45.833   19:19:16 sma.sma_vhost -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:45.833   19:19:16 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:20:45.833  ************************************
00:20:45.833  END TEST sma_vhost
00:20:45.833  ************************************
00:20:45.833   19:19:16 sma -- sma/sma.sh@16 -- # run_test sma_crypto /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/crypto.sh
00:20:45.833   19:19:16 sma -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:20:45.833   19:19:16 sma -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:45.833   19:19:16 sma -- common/autotest_common.sh@10 -- # set +x
00:20:46.091  ************************************
00:20:46.091  START TEST sma_crypto
00:20:46.091  ************************************
00:20:46.091   19:19:16 sma.sma_crypto -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/crypto.sh
00:20:46.091  * Looking for test storage...
00:20:46.091  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:20:46.091    19:19:16 sma.sma_crypto -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:20:46.091     19:19:16 sma.sma_crypto -- common/autotest_common.sh@1711 -- # lcov --version
00:20:46.091     19:19:16 sma.sma_crypto -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:20:46.091    19:19:16 sma.sma_crypto -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:20:46.091    19:19:16 sma.sma_crypto -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:20:46.091    19:19:16 sma.sma_crypto -- scripts/common.sh@333 -- # local ver1 ver1_l
00:20:46.091    19:19:16 sma.sma_crypto -- scripts/common.sh@334 -- # local ver2 ver2_l
00:20:46.091    19:19:16 sma.sma_crypto -- scripts/common.sh@336 -- # IFS=.-:
00:20:46.091    19:19:16 sma.sma_crypto -- scripts/common.sh@336 -- # read -ra ver1
00:20:46.091    19:19:16 sma.sma_crypto -- scripts/common.sh@337 -- # IFS=.-:
00:20:46.091    19:19:16 sma.sma_crypto -- scripts/common.sh@337 -- # read -ra ver2
00:20:46.091    19:19:16 sma.sma_crypto -- scripts/common.sh@338 -- # local 'op=<'
00:20:46.091    19:19:16 sma.sma_crypto -- scripts/common.sh@340 -- # ver1_l=2
00:20:46.091    19:19:16 sma.sma_crypto -- scripts/common.sh@341 -- # ver2_l=1
00:20:46.091    19:19:16 sma.sma_crypto -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:20:46.091    19:19:16 sma.sma_crypto -- scripts/common.sh@344 -- # case "$op" in
00:20:46.091    19:19:16 sma.sma_crypto -- scripts/common.sh@345 -- # : 1
00:20:46.091    19:19:16 sma.sma_crypto -- scripts/common.sh@364 -- # (( v = 0 ))
00:20:46.091    19:19:16 sma.sma_crypto -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:46.091     19:19:16 sma.sma_crypto -- scripts/common.sh@365 -- # decimal 1
00:20:46.091     19:19:16 sma.sma_crypto -- scripts/common.sh@353 -- # local d=1
00:20:46.091     19:19:16 sma.sma_crypto -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:46.091     19:19:16 sma.sma_crypto -- scripts/common.sh@355 -- # echo 1
00:20:46.091    19:19:16 sma.sma_crypto -- scripts/common.sh@365 -- # ver1[v]=1
00:20:46.091     19:19:16 sma.sma_crypto -- scripts/common.sh@366 -- # decimal 2
00:20:46.091     19:19:16 sma.sma_crypto -- scripts/common.sh@353 -- # local d=2
00:20:46.091     19:19:16 sma.sma_crypto -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:46.091     19:19:16 sma.sma_crypto -- scripts/common.sh@355 -- # echo 2
00:20:46.091    19:19:16 sma.sma_crypto -- scripts/common.sh@366 -- # ver2[v]=2
00:20:46.091    19:19:16 sma.sma_crypto -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:20:46.091    19:19:16 sma.sma_crypto -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:20:46.091    19:19:16 sma.sma_crypto -- scripts/common.sh@368 -- # return 0
00:20:46.091    19:19:16 sma.sma_crypto -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:46.091    19:19:16 sma.sma_crypto -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:20:46.091  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:46.091  		--rc genhtml_branch_coverage=1
00:20:46.091  		--rc genhtml_function_coverage=1
00:20:46.091  		--rc genhtml_legend=1
00:20:46.091  		--rc geninfo_all_blocks=1
00:20:46.091  		--rc geninfo_unexecuted_blocks=1
00:20:46.091  		
00:20:46.091  		'
00:20:46.091    19:19:16 sma.sma_crypto -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:20:46.091  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:46.091  		--rc genhtml_branch_coverage=1
00:20:46.091  		--rc genhtml_function_coverage=1
00:20:46.091  		--rc genhtml_legend=1
00:20:46.091  		--rc geninfo_all_blocks=1
00:20:46.092  		--rc geninfo_unexecuted_blocks=1
00:20:46.092  		
00:20:46.092  		'
00:20:46.092    19:19:16 sma.sma_crypto -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:20:46.092  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:46.092  		--rc genhtml_branch_coverage=1
00:20:46.092  		--rc genhtml_function_coverage=1
00:20:46.092  		--rc genhtml_legend=1
00:20:46.092  		--rc geninfo_all_blocks=1
00:20:46.092  		--rc geninfo_unexecuted_blocks=1
00:20:46.092  		
00:20:46.092  		'
00:20:46.092    19:19:16 sma.sma_crypto -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:20:46.092  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:46.092  		--rc genhtml_branch_coverage=1
00:20:46.092  		--rc genhtml_function_coverage=1
00:20:46.092  		--rc genhtml_legend=1
00:20:46.092  		--rc geninfo_all_blocks=1
00:20:46.092  		--rc geninfo_unexecuted_blocks=1
00:20:46.092  		
00:20:46.092  		'
00:20:46.092   19:19:16 sma.sma_crypto -- sma/crypto.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:20:46.092   19:19:16 sma.sma_crypto -- sma/crypto.sh@13 -- # rpc_py=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:20:46.092   19:19:16 sma.sma_crypto -- sma/crypto.sh@14 -- # localnqn=nqn.2016-06.io.spdk:cnode0
00:20:46.092   19:19:16 sma.sma_crypto -- sma/crypto.sh@15 -- # tgtnqn=nqn.2016-06.io.spdk:tgt0
00:20:46.092   19:19:16 sma.sma_crypto -- sma/crypto.sh@16 -- # key0=1234567890abcdef1234567890abcdef
00:20:46.092   19:19:16 sma.sma_crypto -- sma/crypto.sh@17 -- # key1=deadbeefcafebabefeedbeefbabecafe
00:20:46.092   19:19:16 sma.sma_crypto -- sma/crypto.sh@18 -- # tgtsock=/var/tmp/spdk.sock2
00:20:46.092   19:19:16 sma.sma_crypto -- sma/crypto.sh@19 -- # discovery_port=8009
00:20:46.092   19:19:16 sma.sma_crypto -- sma/crypto.sh@145 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:20:46.092   19:19:16 sma.sma_crypto -- sma/crypto.sh@148 -- # hostpid=605860
00:20:46.092   19:19:16 sma.sma_crypto -- sma/crypto.sh@147 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --wait-for-rpc
00:20:46.092   19:19:16 sma.sma_crypto -- sma/crypto.sh@150 -- # waitforlisten 605860
00:20:46.092   19:19:16 sma.sma_crypto -- common/autotest_common.sh@835 -- # '[' -z 605860 ']'
00:20:46.092   19:19:16 sma.sma_crypto -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:46.092   19:19:16 sma.sma_crypto -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:46.092   19:19:16 sma.sma_crypto -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:46.092  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:46.092   19:19:16 sma.sma_crypto -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:46.092   19:19:16 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:20:46.350  [2024-12-06 19:19:17.054037] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:20:46.350  [2024-12-06 19:19:17.054182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid605860 ]
00:20:46.350  EAL: No free 2048 kB hugepages reported on node 1
00:20:46.350  [2024-12-06 19:19:17.184900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:46.610  [2024-12-06 19:19:17.305363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:47.178   19:19:18 sma.sma_crypto -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:47.178   19:19:18 sma.sma_crypto -- common/autotest_common.sh@868 -- # return 0
00:20:47.178   19:19:18 sma.sma_crypto -- sma/crypto.sh@153 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py dpdk_cryptodev_scan_accel_module
00:20:47.436   19:19:18 sma.sma_crypto -- sma/crypto.sh@154 -- # rpc_cmd dpdk_cryptodev_set_driver -d crypto_aesni_mb
00:20:47.436   19:19:18 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:47.436   19:19:18 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:20:47.436  [2024-12-06 19:19:18.296701] accel_dpdk_cryptodev.c: 224:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb
00:20:47.436   19:19:18 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:47.436   19:19:18 sma.sma_crypto -- sma/crypto.sh@155 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py accel_assign_opc -o encrypt -m dpdk_cryptodev
00:20:47.696  [2024-12-06 19:19:18.557451] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:20:47.696   19:19:18 sma.sma_crypto -- sma/crypto.sh@156 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py accel_assign_opc -o decrypt -m dpdk_cryptodev
00:20:47.955  [2024-12-06 19:19:18.830194] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:20:47.955   19:19:18 sma.sma_crypto -- sma/crypto.sh@157 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py framework_start_init
00:20:48.526  [2024-12-06 19:19:19.341290] accel_dpdk_cryptodev.c:1179:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 1
00:20:49.117   19:19:19 sma.sma_crypto -- sma/crypto.sh@160 -- # tgtpid=606174
00:20:49.117   19:19:19 sma.sma_crypto -- sma/crypto.sh@159 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/spdk.sock2 -m 0x2
00:20:49.117   19:19:19 sma.sma_crypto -- sma/crypto.sh@172 -- # smapid=606176
00:20:49.117   19:19:19 sma.sma_crypto -- sma/crypto.sh@175 -- # sma_waitforlisten
00:20:49.117   19:19:19 sma.sma_crypto -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:20:49.117   19:19:19 sma.sma_crypto -- sma/common.sh@8 -- # local sma_port=8080
00:20:49.117   19:19:19 sma.sma_crypto -- sma/common.sh@10 -- # (( i = 0 ))
00:20:49.117   19:19:19 sma.sma_crypto -- sma/common.sh@10 -- # (( i < 5 ))
00:20:49.117   19:19:19 sma.sma_crypto -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:20:49.117   19:19:19 sma.sma_crypto -- sma/crypto.sh@162 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:20:49.117    19:19:19 sma.sma_crypto -- sma/crypto.sh@162 -- # cat
00:20:49.117   19:19:20 sma.sma_crypto -- sma/common.sh@14 -- # sleep 1s
00:20:49.378  [2024-12-06 19:19:20.108409] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:20:49.378  [2024-12-06 19:19:20.108573] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid606174 ]
00:20:49.378  EAL: No free 2048 kB hugepages reported on node 1
00:20:49.378  [2024-12-06 19:19:20.244838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:49.378  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:20:49.378  I0000 00:00:1733509160.274571  606176 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:20:49.378  [2024-12-06 19:19:20.288623] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:49.638  [2024-12-06 19:19:20.369229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:20:50.208   19:19:21 sma.sma_crypto -- sma/common.sh@10 -- # (( i++ ))
00:20:50.208   19:19:21 sma.sma_crypto -- sma/common.sh@10 -- # (( i < 5 ))
00:20:50.208   19:19:21 sma.sma_crypto -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:20:50.208   19:19:21 sma.sma_crypto -- sma/common.sh@12 -- # return 0
00:20:50.208    19:19:21 sma.sma_crypto -- sma/crypto.sh@178 -- # uuidgen
00:20:50.208   19:19:21 sma.sma_crypto -- sma/crypto.sh@178 -- # uuid=8ffb3c5c-7b4d-454d-a610-242a29a94ad2
00:20:50.208   19:19:21 sma.sma_crypto -- sma/crypto.sh@179 -- # waitforlisten 606174 /var/tmp/spdk.sock2
00:20:50.208   19:19:21 sma.sma_crypto -- common/autotest_common.sh@835 -- # '[' -z 606174 ']'
00:20:50.208   19:19:21 sma.sma_crypto -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock2
00:20:50.208   19:19:21 sma.sma_crypto -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:50.208   19:19:21 sma.sma_crypto -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock2...'
00:20:50.208  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock2...
00:20:50.208   19:19:21 sma.sma_crypto -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:50.208   19:19:21 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:20:50.467   19:19:21 sma.sma_crypto -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:50.467   19:19:21 sma.sma_crypto -- common/autotest_common.sh@868 -- # return 0
00:20:50.467   19:19:21 sma.sma_crypto -- sma/crypto.sh@180 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock2
00:20:51.038  [2024-12-06 19:19:21.690522] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:51.038  [2024-12-06 19:19:21.707150] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 8009 ***
00:20:51.038  [2024-12-06 19:19:21.714759] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 ***
00:20:51.038  malloc0
00:20:51.038    19:19:21 sma.sma_crypto -- sma/crypto.sh@190 -- # create_device
00:20:51.038    19:19:21 sma.sma_crypto -- sma/crypto.sh@190 -- # jq -r .handle
00:20:51.038    19:19:21 sma.sma_crypto -- sma/crypto.sh@77 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:51.038  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:20:51.038  I0000 00:00:1733509161.981001  606447 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:20:51.038  I0000 00:00:1733509161.982993  606447 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:20:51.038  I0000 00:00:1733509161.984619  606453 subchannel.cc:806] subchannel 0x5627cbb18560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5627cbb2ef20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5627cbae56e0, grpc.internal.client_channel_call_destination=0x7fa9645c8390, grpc.internal.event_engine=0x5627cbb145b0, grpc.internal.security_connector=0x5627cba98fb0, grpc.internal.subchannel_pool=0x5627cbb68410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5627cba32a60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:21.984118013+01:00"}), backing off for 999 ms
00:20:51.297  [2024-12-06 19:19:22.006190] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:20:51.297   19:19:22 sma.sma_crypto -- sma/crypto.sh@190 -- # device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:20:51.297   19:19:22 sma.sma_crypto -- sma/crypto.sh@193 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 8ffb3c5c-7b4d-454d-a610-242a29a94ad2
00:20:51.297   19:19:22 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:20:51.297   19:19:22 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:20:51.297   19:19:22 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:51.297    19:19:22 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 8ffb3c5c-7b4d-454d-a610-242a29a94ad2
00:20:51.297    19:19:22 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=8ffb3c5c-7b4d-454d-a610-242a29a94ad2 cipher= key= key2= config
00:20:51.297    19:19:22 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:20:51.297     19:19:22 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:20:51.297      19:19:22 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 8ffb3c5c-7b4d-454d-a610-242a29a94ad2
00:20:51.297      19:19:22 sma.sma_crypto -- sma/common.sh@20 -- # python
00:20:51.297    19:19:22 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "j/s8XHtNRU2mECQqKalK0g==",
00:20:51.297  "nvmf": {
00:20:51.297    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:20:51.297    "discovery": {
00:20:51.297      "discovery_endpoints": [
00:20:51.297        {
00:20:51.297          "trtype": "tcp",
00:20:51.297          "traddr": "127.0.0.1",
00:20:51.297          "trsvcid": "8009"
00:20:51.297        }
00:20:51.297      ]
00:20:51.297    }
00:20:51.297  }'
00:20:51.297    19:19:22 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:20:51.297    19:19:22 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:20:51.297    19:19:22 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n '' ]]
00:20:51.297    19:19:22 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:20:51.557  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:20:51.558  I0000 00:00:1733509162.349596  606474 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:20:51.558  I0000 00:00:1733509162.351406  606474 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:20:51.558  I0000 00:00:1733509162.353079  606599 subchannel.cc:806] subchannel 0x562968ef1560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x562968f07f20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x562968ebe6e0, grpc.internal.client_channel_call_destination=0x7faae1889390, grpc.internal.event_engine=0x562968eed5b0, grpc.internal.security_connector=0x562968eed540, grpc.internal.subchannel_pool=0x562968f41410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x562968e0ba60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:22.352592658+01:00"}), backing off for 1000 ms
00:20:52.937  {}
00:20:52.937    19:19:23 sma.sma_crypto -- sma/crypto.sh@195 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:20:52.937    19:19:23 sma.sma_crypto -- sma/crypto.sh@195 -- # jq -r '.[0].namespaces[0].name'
00:20:52.938    19:19:23 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:52.938    19:19:23 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:20:52.938    19:19:23 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:52.938   19:19:23 sma.sma_crypto -- sma/crypto.sh@195 -- # ns_bdev=39b9b3e4-1605-44ba-aa37-adbda2cbb3f00n1
00:20:52.938    19:19:23 sma.sma_crypto -- sma/crypto.sh@196 -- # rpc_cmd bdev_get_bdevs -b 39b9b3e4-1605-44ba-aa37-adbda2cbb3f00n1
00:20:52.938    19:19:23 sma.sma_crypto -- sma/crypto.sh@196 -- # jq -r '.[0].product_name'
00:20:52.938    19:19:23 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:52.938    19:19:23 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:20:52.938    19:19:23 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:52.938   19:19:23 sma.sma_crypto -- sma/crypto.sh@196 -- # [[ NVMe disk == \N\V\M\e\ \d\i\s\k ]]
00:20:52.938    19:19:23 sma.sma_crypto -- sma/crypto.sh@197 -- # rpc_cmd bdev_get_bdevs
00:20:52.938    19:19:23 sma.sma_crypto -- sma/crypto.sh@197 -- # jq -r '[.[] | select(.product_name == "crypto")] | length'
00:20:52.938    19:19:23 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:52.938    19:19:23 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:20:52.938    19:19:23 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:52.938   19:19:23 sma.sma_crypto -- sma/crypto.sh@197 -- # [[ 0 -eq 0 ]]
00:20:52.938    19:19:23 sma.sma_crypto -- sma/crypto.sh@198 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:20:52.938    19:19:23 sma.sma_crypto -- sma/crypto.sh@198 -- # jq -r '.[0].namespaces[0].uuid'
00:20:52.938    19:19:23 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:52.938    19:19:23 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:20:52.938    19:19:23 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:52.938   19:19:23 sma.sma_crypto -- sma/crypto.sh@198 -- # [[ 8ffb3c5c-7b4d-454d-a610-242a29a94ad2 == \8\f\f\b\3\c\5\c\-\7\b\4\d\-\4\5\4\d\-\a\6\1\0\-\2\4\2\a\2\9\a\9\4\a\d\2 ]]
00:20:52.938    19:19:23 sma.sma_crypto -- sma/crypto.sh@199 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:20:52.938    19:19:23 sma.sma_crypto -- sma/crypto.sh@199 -- # jq -r '.[0].namespaces[0].nguid'
00:20:52.938    19:19:23 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:52.938    19:19:23 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:20:52.938    19:19:23 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:52.938    19:19:23 sma.sma_crypto -- sma/crypto.sh@199 -- # uuid2nguid 8ffb3c5c-7b4d-454d-a610-242a29a94ad2
00:20:52.938    19:19:23 sma.sma_crypto -- sma/common.sh@40 -- # local uuid=8FFB3C5C-7B4D-454D-A610-242A29A94AD2
00:20:52.938    19:19:23 sma.sma_crypto -- sma/common.sh@41 -- # echo 8FFB3C5C7B4D454DA610242A29A94AD2
00:20:52.938   19:19:23 sma.sma_crypto -- sma/crypto.sh@199 -- # [[ 8FFB3C5C7B4D454DA610242A29A94AD2 == \8\F\F\B\3\C\5\C\7\B\4\D\4\5\4\D\A\6\1\0\2\4\2\A\2\9\A\9\4\A\D\2 ]]
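The nguid check above relies on the `uuid2nguid` helper from sma/common.sh; as the traced `local uuid=...` / `echo ...` lines show, it simply strips the dashes from the volume UUID and uppercases the hex. A minimal Python sketch of that conversion (the function name mirrors the shell helper; this is an illustration, not the test's actual code):

```python
# Sketch of sma/common.sh uuid2nguid: the NVMe namespace NGUID is the
# volume UUID with dashes removed and hex digits uppercased.
def uuid2nguid(u: str) -> str:
    return u.replace("-", "").upper()

print(uuid2nguid("8ffb3c5c-7b4d-454d-a610-242a29a94ad2"))
# → 8FFB3C5C7B4D454DA610242A29A94AD2
```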
00:20:52.938   19:19:23 sma.sma_crypto -- sma/crypto.sh@201 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 8ffb3c5c-7b4d-454d-a610-242a29a94ad2
00:20:52.938   19:19:23 sma.sma_crypto -- sma/crypto.sh@120 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:52.938    19:19:23 sma.sma_crypto -- sma/crypto.sh@120 -- # uuid2base64 8ffb3c5c-7b4d-454d-a610-242a29a94ad2
00:20:52.938    19:19:23 sma.sma_crypto -- sma/common.sh@20 -- # python
00:20:53.196  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:20:53.196  I0000 00:00:1733509164.056988  606776 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:20:53.196  I0000 00:00:1733509164.058836  606776 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:20:53.196  I0000 00:00:1733509164.060501  606779 subchannel.cc:806] subchannel 0x55ef65d06560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55ef65d1cf20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55ef65cd36e0, grpc.internal.client_channel_call_destination=0x7f1194092390, grpc.internal.event_engine=0x55ef65d025b0, grpc.internal.security_connector=0x55ef65c86fb0, grpc.internal.subchannel_pool=0x55ef65d56410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55ef65c20a60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:24.059930769+01:00"}), backing off for 1000 ms
00:20:53.196  {}
00:20:53.197   19:19:24 sma.sma_crypto -- sma/crypto.sh@204 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 8ffb3c5c-7b4d-454d-a610-242a29a94ad2 AES_CBC 1234567890abcdef1234567890abcdef
00:20:53.197   19:19:24 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:20:53.197   19:19:24 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:20:53.197   19:19:24 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:53.197    19:19:24 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 8ffb3c5c-7b4d-454d-a610-242a29a94ad2 AES_CBC 1234567890abcdef1234567890abcdef
00:20:53.197    19:19:24 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=8ffb3c5c-7b4d-454d-a610-242a29a94ad2 cipher=AES_CBC key=1234567890abcdef1234567890abcdef key2= config
00:20:53.197    19:19:24 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:20:53.197     19:19:24 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:20:53.455      19:19:24 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 8ffb3c5c-7b4d-454d-a610-242a29a94ad2
00:20:53.455      19:19:24 sma.sma_crypto -- sma/common.sh@20 -- # python
00:20:53.455    19:19:24 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "j/s8XHtNRU2mECQqKalK0g==",
00:20:53.455  "nvmf": {
00:20:53.455    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:20:53.455    "discovery": {
00:20:53.455      "discovery_endpoints": [
00:20:53.455        {
00:20:53.455          "trtype": "tcp",
00:20:53.455          "traddr": "127.0.0.1",
00:20:53.455          "trsvcid": "8009"
00:20:53.455        }
00:20:53.455      ]
00:20:53.455    }
00:20:53.455  }'
00:20:53.455    19:19:24 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:20:53.455    19:19:24 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:20:53.455    19:19:24 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n AES_CBC ]]
00:20:53.455    19:19:24 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:20:53.455     19:19:24 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher AES_CBC
00:20:53.455     19:19:24 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:20:53.455     19:19:24 sma.sma_crypto -- sma/common.sh@28 -- # echo 0
00:20:53.455    19:19:24 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:20:53.455     19:19:24 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:20:53.455     19:19:24 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:20:53.455      19:19:24 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:20:53.455    19:19:24 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:20:53.455     19:19:24 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:20:53.455    19:19:24 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:20:53.455    "cipher": 0,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY="
00:20:53.455  }'
00:20:53.455    19:19:24 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:20:53.455    19:19:24 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
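The attach request assembled above encodes the volume UUID and the crypto key with two helpers: `uuid2base64` (the log shows it shelling out to python, producing the `"volume_id": "j/s8XHtNRU2mECQqKalK0g=="` field) and `format_key` (`echo -n $key | base64 -w 0`, producing the `"key"` field). A minimal Python sketch of both conversions, inferred from the traced output (illustrative only):

```python
import base64
import uuid

# uuid2base64: base64 of the UUID's 16 raw (big-endian) bytes; this becomes
# the "volume_id" field of the SMA AttachVolume request.
def uuid2base64(u: str) -> str:
    return base64.b64encode(uuid.UUID(u).bytes).decode()

# format_key: base64 of the ASCII key with no trailing newline,
# equivalent to `echo -n "$key" | base64 -w 0`.
def format_key(key: str) -> str:
    return base64.b64encode(key.encode()).decode()

print(uuid2base64("8ffb3c5c-7b4d-454d-a610-242a29a94ad2"))  # j/s8XHtNRU2mECQqKalK0g==
print(format_key("1234567890abcdef1234567890abcdef"))
```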
00:20:53.714  I0000 00:00:1733509164.435457  606805 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:20:53.714  I0000 00:00:1733509164.437238  606805 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:20:53.714  I0000 00:00:1733509164.438826  606823 subchannel.cc:806] subchannel 0x5653db6e1560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5653db6f7f20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5653db6ae6e0, grpc.internal.client_channel_call_destination=0x7f4651441390, grpc.internal.event_engine=0x5653db6dd5b0, grpc.internal.security_connector=0x5653db6dd540, grpc.internal.subchannel_pool=0x5653db731410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5653db5fba60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:24.438325938+01:00"}), backing off for 1000 ms
00:20:54.655  {}
00:20:54.913    19:19:25 sma.sma_crypto -- sma/crypto.sh@206 -- # rpc_cmd bdev_nvme_get_discovery_info
00:20:54.913    19:19:25 sma.sma_crypto -- sma/crypto.sh@206 -- # jq -r '. | length'
00:20:54.913    19:19:25 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:54.913    19:19:25 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:20:54.913    19:19:25 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:54.913   19:19:25 sma.sma_crypto -- sma/crypto.sh@206 -- # [[ 1 -eq 1 ]]
00:20:54.913    19:19:25 sma.sma_crypto -- sma/crypto.sh@207 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:20:54.913    19:19:25 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:54.913    19:19:25 sma.sma_crypto -- sma/crypto.sh@207 -- # jq -r '.[0].namespaces | length'
00:20:54.913    19:19:25 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:20:54.913    19:19:25 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:54.913   19:19:25 sma.sma_crypto -- sma/crypto.sh@207 -- # [[ 1 -eq 1 ]]
00:20:54.913   19:19:25 sma.sma_crypto -- sma/crypto.sh@209 -- # verify_crypto_volume nqn.2016-06.io.spdk:cnode0 8ffb3c5c-7b4d-454d-a610-242a29a94ad2
00:20:54.913   19:19:25 sma.sma_crypto -- sma/crypto.sh@132 -- # local nqn=nqn.2016-06.io.spdk:cnode0 uuid=8ffb3c5c-7b4d-454d-a610-242a29a94ad2 ns ns_bdev
00:20:54.913    19:19:25 sma.sma_crypto -- sma/crypto.sh@134 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:20:54.913    19:19:25 sma.sma_crypto -- sma/crypto.sh@134 -- # jq -r '.[0].namespaces[0]'
00:20:54.913    19:19:25 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:54.913    19:19:25 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:20:54.913    19:19:25 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:54.913   19:19:25 sma.sma_crypto -- sma/crypto.sh@134 -- # ns='{
00:20:54.913    "nsid": 1,
00:20:54.913    "bdev_name": "62a292b5-5519-4502-8fda-12735ba8c6f8",
00:20:54.913    "name": "62a292b5-5519-4502-8fda-12735ba8c6f8",
00:20:54.913    "nguid": "8FFB3C5C7B4D454DA610242A29A94AD2",
00:20:54.913    "uuid": "8ffb3c5c-7b4d-454d-a610-242a29a94ad2"
00:20:54.913  }'
00:20:54.913    19:19:25 sma.sma_crypto -- sma/crypto.sh@135 -- # jq -r .name
00:20:54.913   19:19:25 sma.sma_crypto -- sma/crypto.sh@135 -- # ns_bdev=62a292b5-5519-4502-8fda-12735ba8c6f8
00:20:54.913    19:19:25 sma.sma_crypto -- sma/crypto.sh@138 -- # rpc_cmd bdev_get_bdevs -b 62a292b5-5519-4502-8fda-12735ba8c6f8
00:20:54.913    19:19:25 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:54.913    19:19:25 sma.sma_crypto -- sma/crypto.sh@138 -- # jq -r '.[0].product_name'
00:20:54.913    19:19:25 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:20:54.913    19:19:25 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:54.913   19:19:25 sma.sma_crypto -- sma/crypto.sh@138 -- # [[ crypto == crypto ]]
00:20:54.913    19:19:25 sma.sma_crypto -- sma/crypto.sh@139 -- # rpc_cmd bdev_get_bdevs
00:20:54.913    19:19:25 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:54.913    19:19:25 sma.sma_crypto -- sma/crypto.sh@139 -- # jq -r '[.[] | select(.product_name == "crypto")] | length'
00:20:54.913    19:19:25 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:20:54.913    19:19:25 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:54.913   19:19:25 sma.sma_crypto -- sma/crypto.sh@139 -- # [[ 1 -eq 1 ]]
00:20:54.913    19:19:25 sma.sma_crypto -- sma/crypto.sh@141 -- # jq -r .uuid
00:20:55.171   19:19:25 sma.sma_crypto -- sma/crypto.sh@141 -- # [[ 8ffb3c5c-7b4d-454d-a610-242a29a94ad2 == \8\f\f\b\3\c\5\c\-\7\b\4\d\-\4\5\4\d\-\a\6\1\0\-\2\4\2\a\2\9\a\9\4\a\d\2 ]]
00:20:55.171    19:19:25 sma.sma_crypto -- sma/crypto.sh@142 -- # jq -r .nguid
00:20:55.171    19:19:25 sma.sma_crypto -- sma/crypto.sh@142 -- # uuid2nguid 8ffb3c5c-7b4d-454d-a610-242a29a94ad2
00:20:55.171    19:19:25 sma.sma_crypto -- sma/common.sh@40 -- # local uuid=8FFB3C5C-7B4D-454D-A610-242A29A94AD2
00:20:55.171    19:19:25 sma.sma_crypto -- sma/common.sh@41 -- # echo 8FFB3C5C7B4D454DA610242A29A94AD2
00:20:55.171   19:19:25 sma.sma_crypto -- sma/crypto.sh@142 -- # [[ 8FFB3C5C7B4D454DA610242A29A94AD2 == \8\F\F\B\3\C\5\C\7\B\4\D\4\5\4\D\A\6\1\0\2\4\2\A\2\9\A\9\4\A\D\2 ]]
00:20:55.171    19:19:25 sma.sma_crypto -- sma/crypto.sh@211 -- # rpc_cmd bdev_get_bdevs
00:20:55.171    19:19:25 sma.sma_crypto -- sma/crypto.sh@211 -- # jq -r '.[] | select(.product_name == "crypto")'
00:20:55.171    19:19:25 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:55.171    19:19:25 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:20:55.171    19:19:25 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:55.171   19:19:25 sma.sma_crypto -- sma/crypto.sh@211 -- # crypto_bdev='{
00:20:55.171    "name": "62a292b5-5519-4502-8fda-12735ba8c6f8",
00:20:55.171    "aliases": [
00:20:55.171      "06c867f3-0509-5b2f-b4a5-e77f727eb0c4"
00:20:55.171    ],
00:20:55.171    "product_name": "crypto",
00:20:55.171    "block_size": 4096,
00:20:55.171    "num_blocks": 8192,
00:20:55.171    "uuid": "06c867f3-0509-5b2f-b4a5-e77f727eb0c4",
00:20:55.171    "assigned_rate_limits": {
00:20:55.171      "rw_ios_per_sec": 0,
00:20:55.171      "rw_mbytes_per_sec": 0,
00:20:55.171      "r_mbytes_per_sec": 0,
00:20:55.171      "w_mbytes_per_sec": 0
00:20:55.171    },
00:20:55.171    "claimed": true,
00:20:55.171    "claim_type": "exclusive_write",
00:20:55.171    "zoned": false,
00:20:55.171    "supported_io_types": {
00:20:55.171      "read": true,
00:20:55.172      "write": true,
00:20:55.172      "unmap": true,
00:20:55.172      "flush": true,
00:20:55.172      "reset": true,
00:20:55.172      "nvme_admin": false,
00:20:55.172      "nvme_io": false,
00:20:55.172      "nvme_io_md": false,
00:20:55.172      "write_zeroes": true,
00:20:55.172      "zcopy": false,
00:20:55.172      "get_zone_info": false,
00:20:55.172      "zone_management": false,
00:20:55.172      "zone_append": false,
00:20:55.172      "compare": false,
00:20:55.172      "compare_and_write": false,
00:20:55.172      "abort": false,
00:20:55.172      "seek_hole": false,
00:20:55.172      "seek_data": false,
00:20:55.172      "copy": false,
00:20:55.172      "nvme_iov_md": false
00:20:55.172    },
00:20:55.172    "memory_domains": [
00:20:55.172      {
00:20:55.172        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:55.172        "dma_device_type": 2
00:20:55.172      }
00:20:55.172    ],
00:20:55.172    "driver_specific": {
00:20:55.172      "crypto": {
00:20:55.172        "base_bdev_name": "6596f4be-470a-477f-946c-8a1d13d72cac0n1",
00:20:55.172        "name": "62a292b5-5519-4502-8fda-12735ba8c6f8",
00:20:55.172        "key_name": "62a292b5-5519-4502-8fda-12735ba8c6f8_AES_CBC"
00:20:55.172      }
00:20:55.172    }
00:20:55.172  }'
00:20:55.172    19:19:25 sma.sma_crypto -- sma/crypto.sh@212 -- # jq -r .driver_specific.crypto.key_name
00:20:55.172   19:19:25 sma.sma_crypto -- sma/crypto.sh@212 -- # key_name=62a292b5-5519-4502-8fda-12735ba8c6f8_AES_CBC
00:20:55.172    19:19:25 sma.sma_crypto -- sma/crypto.sh@213 -- # rpc_cmd accel_crypto_keys_get -k 62a292b5-5519-4502-8fda-12735ba8c6f8_AES_CBC
00:20:55.172    19:19:25 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:55.172    19:19:25 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:20:55.172    19:19:25 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:55.172   19:19:25 sma.sma_crypto -- sma/crypto.sh@213 -- # key_obj='[
00:20:55.172  {
00:20:55.172  "name": "62a292b5-5519-4502-8fda-12735ba8c6f8_AES_CBC",
00:20:55.172  "cipher": "AES_CBC",
00:20:55.172  "key": "1234567890abcdef1234567890abcdef"
00:20:55.172  }
00:20:55.172  ]'
00:20:55.172    19:19:25 sma.sma_crypto -- sma/crypto.sh@214 -- # jq -r '.[0].key'
00:20:55.172   19:19:26 sma.sma_crypto -- sma/crypto.sh@214 -- # [[ 1234567890abcdef1234567890abcdef == \1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f\1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f ]]
00:20:55.172    19:19:26 sma.sma_crypto -- sma/crypto.sh@215 -- # jq -r '.[0].cipher'
00:20:55.172   19:19:26 sma.sma_crypto -- sma/crypto.sh@215 -- # [[ AES_CBC == \A\E\S\_\C\B\C ]]
00:20:55.172   19:19:26 sma.sma_crypto -- sma/crypto.sh@218 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 8ffb3c5c-7b4d-454d-a610-242a29a94ad2 AES_CBC 1234567890abcdef1234567890abcdef
00:20:55.172   19:19:26 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:20:55.172   19:19:26 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:20:55.172   19:19:26 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:55.172    19:19:26 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 8ffb3c5c-7b4d-454d-a610-242a29a94ad2 AES_CBC 1234567890abcdef1234567890abcdef
00:20:55.172    19:19:26 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=8ffb3c5c-7b4d-454d-a610-242a29a94ad2 cipher=AES_CBC key=1234567890abcdef1234567890abcdef key2= config
00:20:55.172    19:19:26 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:20:55.172     19:19:26 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:20:55.172      19:19:26 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 8ffb3c5c-7b4d-454d-a610-242a29a94ad2
00:20:55.172      19:19:26 sma.sma_crypto -- sma/common.sh@20 -- # python
00:20:55.172    19:19:26 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "j/s8XHtNRU2mECQqKalK0g==",
00:20:55.172  "nvmf": {
00:20:55.172    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:20:55.172    "discovery": {
00:20:55.172      "discovery_endpoints": [
00:20:55.172        {
00:20:55.172          "trtype": "tcp",
00:20:55.172          "traddr": "127.0.0.1",
00:20:55.172          "trsvcid": "8009"
00:20:55.172        }
00:20:55.172      ]
00:20:55.172    }
00:20:55.172  }'
00:20:55.172    19:19:26 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:20:55.172    19:19:26 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:20:55.172    19:19:26 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n AES_CBC ]]
00:20:55.172    19:19:26 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:20:55.172     19:19:26 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher AES_CBC
00:20:55.172     19:19:26 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:20:55.172     19:19:26 sma.sma_crypto -- sma/common.sh@28 -- # echo 0
00:20:55.172    19:19:26 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:20:55.172     19:19:26 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:20:55.172     19:19:26 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:20:55.172      19:19:26 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:20:55.172    19:19:26 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:20:55.172     19:19:26 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:20:55.172    19:19:26 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:20:55.172    "cipher": 0,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY="
00:20:55.172  }'
00:20:55.172    19:19:26 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:20:55.172    19:19:26 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:20:55.441  I0000 00:00:1733509166.328222  607134 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:20:55.441  I0000 00:00:1733509166.330113  607134 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:20:55.441  I0000 00:00:1733509166.331721  607148 subchannel.cc:806] subchannel 0x5603e8b56560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5603e8b6cf20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5603e8b236e0, grpc.internal.client_channel_call_destination=0x7f0e2e26b390, grpc.internal.event_engine=0x5603e8b525b0, grpc.internal.security_connector=0x5603e8b52540, grpc.internal.subchannel_pool=0x5603e8ba6410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5603e8a70a60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:26.331248599+01:00"}), backing off for 999 ms
00:20:55.442  {}
00:20:55.703    19:19:26 sma.sma_crypto -- sma/crypto.sh@221 -- # rpc_cmd bdev_nvme_get_discovery_info
00:20:55.703    19:19:26 sma.sma_crypto -- sma/crypto.sh@221 -- # jq -r '. | length'
00:20:55.703    19:19:26 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:55.703    19:19:26 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:20:55.703    19:19:26 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:55.703   19:19:26 sma.sma_crypto -- sma/crypto.sh@221 -- # [[ 1 -eq 1 ]]
00:20:55.703    19:19:26 sma.sma_crypto -- sma/crypto.sh@222 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:20:55.703    19:19:26 sma.sma_crypto -- sma/crypto.sh@222 -- # jq -r '.[0].namespaces | length'
00:20:55.703    19:19:26 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:55.703    19:19:26 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:20:55.703    19:19:26 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:55.703   19:19:26 sma.sma_crypto -- sma/crypto.sh@222 -- # [[ 1 -eq 1 ]]
00:20:55.703   19:19:26 sma.sma_crypto -- sma/crypto.sh@223 -- # verify_crypto_volume nqn.2016-06.io.spdk:cnode0 8ffb3c5c-7b4d-454d-a610-242a29a94ad2
00:20:55.703   19:19:26 sma.sma_crypto -- sma/crypto.sh@132 -- # local nqn=nqn.2016-06.io.spdk:cnode0 uuid=8ffb3c5c-7b4d-454d-a610-242a29a94ad2 ns ns_bdev
00:20:55.703    19:19:26 sma.sma_crypto -- sma/crypto.sh@134 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:20:55.703    19:19:26 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:55.703    19:19:26 sma.sma_crypto -- sma/crypto.sh@134 -- # jq -r '.[0].namespaces[0]'
00:20:55.703    19:19:26 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:20:55.703    19:19:26 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:55.703   19:19:26 sma.sma_crypto -- sma/crypto.sh@134 -- # ns='{
00:20:55.703    "nsid": 1,
00:20:55.703    "bdev_name": "62a292b5-5519-4502-8fda-12735ba8c6f8",
00:20:55.703    "name": "62a292b5-5519-4502-8fda-12735ba8c6f8",
00:20:55.703    "nguid": "8FFB3C5C7B4D454DA610242A29A94AD2",
00:20:55.703    "uuid": "8ffb3c5c-7b4d-454d-a610-242a29a94ad2"
00:20:55.703  }'
00:20:55.703    19:19:26 sma.sma_crypto -- sma/crypto.sh@135 -- # jq -r .name
00:20:55.703   19:19:26 sma.sma_crypto -- sma/crypto.sh@135 -- # ns_bdev=62a292b5-5519-4502-8fda-12735ba8c6f8
00:20:55.703    19:19:26 sma.sma_crypto -- sma/crypto.sh@138 -- # rpc_cmd bdev_get_bdevs -b 62a292b5-5519-4502-8fda-12735ba8c6f8
00:20:55.703    19:19:26 sma.sma_crypto -- sma/crypto.sh@138 -- # jq -r '.[0].product_name'
00:20:55.703    19:19:26 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:55.703    19:19:26 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:20:55.703    19:19:26 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:55.703   19:19:26 sma.sma_crypto -- sma/crypto.sh@138 -- # [[ crypto == crypto ]]
00:20:55.703    19:19:26 sma.sma_crypto -- sma/crypto.sh@139 -- # rpc_cmd bdev_get_bdevs
00:20:55.703    19:19:26 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:55.703    19:19:26 sma.sma_crypto -- sma/crypto.sh@139 -- # jq -r '[.[] | select(.product_name == "crypto")] | length'
00:20:55.703    19:19:26 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:20:55.703    19:19:26 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:55.703   19:19:26 sma.sma_crypto -- sma/crypto.sh@139 -- # [[ 1 -eq 1 ]]
00:20:55.703    19:19:26 sma.sma_crypto -- sma/crypto.sh@141 -- # jq -r .uuid
00:20:55.703   19:19:26 sma.sma_crypto -- sma/crypto.sh@141 -- # [[ 8ffb3c5c-7b4d-454d-a610-242a29a94ad2 == \8\f\f\b\3\c\5\c\-\7\b\4\d\-\4\5\4\d\-\a\6\1\0\-\2\4\2\a\2\9\a\9\4\a\d\2 ]]
00:20:55.703    19:19:26 sma.sma_crypto -- sma/crypto.sh@142 -- # jq -r .nguid
00:20:55.961    19:19:26 sma.sma_crypto -- sma/crypto.sh@142 -- # uuid2nguid 8ffb3c5c-7b4d-454d-a610-242a29a94ad2
00:20:55.961    19:19:26 sma.sma_crypto -- sma/common.sh@40 -- # local uuid=8FFB3C5C-7B4D-454D-A610-242A29A94AD2
00:20:55.961    19:19:26 sma.sma_crypto -- sma/common.sh@41 -- # echo 8FFB3C5C7B4D454DA610242A29A94AD2
00:20:55.961   19:19:26 sma.sma_crypto -- sma/crypto.sh@142 -- # [[ 8FFB3C5C7B4D454DA610242A29A94AD2 == \8\F\F\B\3\C\5\C\7\B\4\D\4\5\4\D\A\6\1\0\2\4\2\A\2\9\A\9\4\A\D\2 ]]
00:20:55.961    19:19:26 sma.sma_crypto -- sma/crypto.sh@224 -- # rpc_cmd bdev_get_bdevs
00:20:55.961    19:19:26 sma.sma_crypto -- sma/crypto.sh@224 -- # jq -r '.[] | select(.product_name == "crypto")'
00:20:55.961    19:19:26 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:55.961    19:19:26 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:20:55.961    19:19:26 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:55.961   19:19:26 sma.sma_crypto -- sma/crypto.sh@224 -- # crypto_bdev2='{
00:20:55.961    "name": "62a292b5-5519-4502-8fda-12735ba8c6f8",
00:20:55.961    "aliases": [
00:20:55.961      "06c867f3-0509-5b2f-b4a5-e77f727eb0c4"
00:20:55.961    ],
00:20:55.961    "product_name": "crypto",
00:20:55.961    "block_size": 4096,
00:20:55.961    "num_blocks": 8192,
00:20:55.961    "uuid": "06c867f3-0509-5b2f-b4a5-e77f727eb0c4",
00:20:55.961    "assigned_rate_limits": {
00:20:55.961      "rw_ios_per_sec": 0,
00:20:55.961      "rw_mbytes_per_sec": 0,
00:20:55.961      "r_mbytes_per_sec": 0,
00:20:55.961      "w_mbytes_per_sec": 0
00:20:55.961    },
00:20:55.961    "claimed": true,
00:20:55.961    "claim_type": "exclusive_write",
00:20:55.961    "zoned": false,
00:20:55.962    "supported_io_types": {
00:20:55.962      "read": true,
00:20:55.962      "write": true,
00:20:55.962      "unmap": true,
00:20:55.962      "flush": true,
00:20:55.962      "reset": true,
00:20:55.962      "nvme_admin": false,
00:20:55.962      "nvme_io": false,
00:20:55.962      "nvme_io_md": false,
00:20:55.962      "write_zeroes": true,
00:20:55.962      "zcopy": false,
00:20:55.962      "get_zone_info": false,
00:20:55.962      "zone_management": false,
00:20:55.962      "zone_append": false,
00:20:55.962      "compare": false,
00:20:55.962      "compare_and_write": false,
00:20:55.962      "abort": false,
00:20:55.962      "seek_hole": false,
00:20:55.962      "seek_data": false,
00:20:55.962      "copy": false,
00:20:55.962      "nvme_iov_md": false
00:20:55.962    },
00:20:55.962    "memory_domains": [
00:20:55.962      {
00:20:55.962        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:55.962        "dma_device_type": 2
00:20:55.962      }
00:20:55.962    ],
00:20:55.962    "driver_specific": {
00:20:55.962      "crypto": {
00:20:55.962        "base_bdev_name": "6596f4be-470a-477f-946c-8a1d13d72cac0n1",
00:20:55.962        "name": "62a292b5-5519-4502-8fda-12735ba8c6f8",
00:20:55.962        "key_name": "62a292b5-5519-4502-8fda-12735ba8c6f8_AES_CBC"
00:20:55.962      }
00:20:55.962    }
00:20:55.962  }'
00:20:55.962    19:19:26 sma.sma_crypto -- sma/crypto.sh@225 -- # jq -r .name
00:20:55.962    19:19:26 sma.sma_crypto -- sma/crypto.sh@225 -- # jq -r .name
00:20:55.962   19:19:26 sma.sma_crypto -- sma/crypto.sh@225 -- # [[ 62a292b5-5519-4502-8fda-12735ba8c6f8 == 62a292b5-5519-4502-8fda-12735ba8c6f8 ]]
00:20:55.962    19:19:26 sma.sma_crypto -- sma/crypto.sh@226 -- # jq -r .driver_specific.crypto.key_name
00:20:55.962   19:19:26 sma.sma_crypto -- sma/crypto.sh@226 -- # key_name=62a292b5-5519-4502-8fda-12735ba8c6f8_AES_CBC
00:20:55.962    19:19:26 sma.sma_crypto -- sma/crypto.sh@227 -- # rpc_cmd accel_crypto_keys_get -k 62a292b5-5519-4502-8fda-12735ba8c6f8_AES_CBC
00:20:55.962    19:19:26 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:55.962    19:19:26 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:20:55.962    19:19:26 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:55.962   19:19:26 sma.sma_crypto -- sma/crypto.sh@227 -- # key_obj='[
00:20:55.962  {
00:20:55.962  "name": "62a292b5-5519-4502-8fda-12735ba8c6f8_AES_CBC",
00:20:55.962  "cipher": "AES_CBC",
00:20:55.962  "key": "1234567890abcdef1234567890abcdef"
00:20:55.962  }
00:20:55.962  ]'
00:20:55.962    19:19:26 sma.sma_crypto -- sma/crypto.sh@228 -- # jq -r '.[0].key'
00:20:55.962   19:19:26 sma.sma_crypto -- sma/crypto.sh@228 -- # [[ 1234567890abcdef1234567890abcdef == \1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f\1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f ]]
00:20:55.962    19:19:26 sma.sma_crypto -- sma/crypto.sh@229 -- # jq -r '.[0].cipher'
00:20:56.220   19:19:26 sma.sma_crypto -- sma/crypto.sh@229 -- # [[ AES_CBC == \A\E\S\_\C\B\C ]]
00:20:56.220   19:19:26 sma.sma_crypto -- sma/crypto.sh@232 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 8ffb3c5c-7b4d-454d-a610-242a29a94ad2 AES_XTS 1234567890abcdef1234567890abcdef
00:20:56.220   19:19:26 sma.sma_crypto -- common/autotest_common.sh@652 -- # local es=0
00:20:56.220   19:19:26 sma.sma_crypto -- common/autotest_common.sh@654 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 8ffb3c5c-7b4d-454d-a610-242a29a94ad2 AES_XTS 1234567890abcdef1234567890abcdef
00:20:56.220   19:19:26 sma.sma_crypto -- common/autotest_common.sh@640 -- # local arg=attach_volume
00:20:56.220   19:19:26 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:56.220    19:19:26 sma.sma_crypto -- common/autotest_common.sh@644 -- # type -t attach_volume
00:20:56.220   19:19:26 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:56.220   19:19:26 sma.sma_crypto -- common/autotest_common.sh@655 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 8ffb3c5c-7b4d-454d-a610-242a29a94ad2 AES_XTS 1234567890abcdef1234567890abcdef
00:20:56.220   19:19:26 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:20:56.220   19:19:26 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:20:56.220   19:19:26 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:56.220    19:19:26 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 8ffb3c5c-7b4d-454d-a610-242a29a94ad2 AES_XTS 1234567890abcdef1234567890abcdef
00:20:56.220    19:19:26 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=8ffb3c5c-7b4d-454d-a610-242a29a94ad2 cipher=AES_XTS key=1234567890abcdef1234567890abcdef key2= config
00:20:56.220    19:19:26 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:20:56.220     19:19:26 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:20:56.220      19:19:26 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 8ffb3c5c-7b4d-454d-a610-242a29a94ad2
00:20:56.220      19:19:26 sma.sma_crypto -- sma/common.sh@20 -- # python
00:20:56.220    19:19:26 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "j/s8XHtNRU2mECQqKalK0g==",
00:20:56.220  "nvmf": {
00:20:56.220    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:20:56.220    "discovery": {
00:20:56.220      "discovery_endpoints": [
00:20:56.220        {
00:20:56.220          "trtype": "tcp",
00:20:56.220          "traddr": "127.0.0.1",
00:20:56.220          "trsvcid": "8009"
00:20:56.220        }
00:20:56.220      ]
00:20:56.220    }
00:20:56.220  }'
00:20:56.220    19:19:26 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:20:56.220    19:19:26 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:20:56.220    19:19:26 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n AES_XTS ]]
00:20:56.220    19:19:26 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:20:56.220     19:19:26 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher AES_XTS
00:20:56.220     19:19:26 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:20:56.220     19:19:26 sma.sma_crypto -- sma/common.sh@29 -- # echo 1
00:20:56.220    19:19:26 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:20:56.220     19:19:26 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:20:56.220     19:19:26 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:20:56.220      19:19:26 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:20:56.220    19:19:26 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:20:56.220     19:19:26 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:20:56.221    19:19:26 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:20:56.221    "cipher": 1,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY="
00:20:56.221  }'
00:20:56.221    19:19:26 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:20:56.221    19:19:26 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:20:56.479  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:20:56.479  I0000 00:00:1733509167.214385  607334 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:20:56.479  I0000 00:00:1733509167.216353  607334 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:20:56.479  I0000 00:00:1733509167.217979  607353 subchannel.cc:806] subchannel 0x55a16cd0f560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55a16cd25f20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55a16ccdc6e0, grpc.internal.client_channel_call_destination=0x7f98bb54b390, grpc.internal.event_engine=0x55a16cd0b5b0, grpc.internal.security_connector=0x55a16cd0b540, grpc.internal.subchannel_pool=0x55a16cd5f410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55a16cc29a60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:27.217476905+01:00"}), backing off for 1000 ms
00:20:56.479  Traceback (most recent call last):
00:20:56.479    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:20:56.479      main(sys.argv[1:])
00:20:56.479    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:20:56.479      result = client.call(request['method'], request.get('params', {}))
00:20:56.479               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:20:56.480    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:20:56.480      response = func(request=json_format.ParseDict(params, input()))
00:20:56.480                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:20:56.480    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:20:56.480      return _end_unary_response_blocking(state, call, False, None)
00:20:56.480             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:20:56.480    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:20:56.480      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:20:56.480      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:20:56.480  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:20:56.480  	status = StatusCode.INVALID_ARGUMENT
00:20:56.480  	details = "Invalid volume crypto configuration: bad cipher"
00:20:56.480  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Invalid volume crypto configuration: bad cipher", grpc_status:3, created_time:"2024-12-06T19:19:27.236014979+01:00"}"
00:20:56.480  >
00:20:56.480   19:19:27 sma.sma_crypto -- common/autotest_common.sh@655 -- # es=1
00:20:56.480   19:19:27 sma.sma_crypto -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:56.480   19:19:27 sma.sma_crypto -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:56.480   19:19:27 sma.sma_crypto -- common/autotest_common.sh@679 -- # (( !es == 0 ))
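The `format_key` helper visible in the trace above (`sma/common.sh@35`) simply base64-encodes the raw key string with no line wrapping; the `/dev/fd/62` path in the log comes from bash process substitution over `echo -n`. A minimal sketch of the same behavior (using a pipe instead of process substitution, assuming GNU `base64` for the `-w 0` flag):

```shell
#!/usr/bin/env bash
# format_key: base64-encode a raw key string with no trailing newline,
# matching the "base64 -w 0 /dev/fd/62" call in the trace (the real
# helper feeds base64 via process substitution; a pipe is equivalent).
format_key() {
    printf '%s' "$1" | base64 -w 0
}

format_key 1234567890abcdef1234567890abcdef
# MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY=
```

The encoded value matches the `"key"` field emitted into `crypto_config` in the trace.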
00:20:56.480   19:19:27 sma.sma_crypto -- sma/crypto.sh@234 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 8ffb3c5c-7b4d-454d-a610-242a29a94ad2 AES_CBC deadbeefcafebabefeedbeefbabecafe
00:20:56.480   19:19:27 sma.sma_crypto -- common/autotest_common.sh@652 -- # local es=0
00:20:56.480   19:19:27 sma.sma_crypto -- common/autotest_common.sh@654 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 8ffb3c5c-7b4d-454d-a610-242a29a94ad2 AES_CBC deadbeefcafebabefeedbeefbabecafe
00:20:56.480   19:19:27 sma.sma_crypto -- common/autotest_common.sh@640 -- # local arg=attach_volume
00:20:56.480   19:19:27 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:56.480    19:19:27 sma.sma_crypto -- common/autotest_common.sh@644 -- # type -t attach_volume
00:20:56.480   19:19:27 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:56.480   19:19:27 sma.sma_crypto -- common/autotest_common.sh@655 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 8ffb3c5c-7b4d-454d-a610-242a29a94ad2 AES_CBC deadbeefcafebabefeedbeefbabecafe
00:20:56.480   19:19:27 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:20:56.480   19:19:27 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:20:56.480   19:19:27 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:56.480    19:19:27 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 8ffb3c5c-7b4d-454d-a610-242a29a94ad2 AES_CBC deadbeefcafebabefeedbeefbabecafe
00:20:56.480    19:19:27 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=8ffb3c5c-7b4d-454d-a610-242a29a94ad2 cipher=AES_CBC key=deadbeefcafebabefeedbeefbabecafe key2= config
00:20:56.480    19:19:27 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:20:56.480     19:19:27 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:20:56.480      19:19:27 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 8ffb3c5c-7b4d-454d-a610-242a29a94ad2
00:20:56.480      19:19:27 sma.sma_crypto -- sma/common.sh@20 -- # python
00:20:56.480    19:19:27 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "j/s8XHtNRU2mECQqKalK0g==",
00:20:56.480  "nvmf": {
00:20:56.480    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:20:56.480    "discovery": {
00:20:56.480      "discovery_endpoints": [
00:20:56.480        {
00:20:56.480          "trtype": "tcp",
00:20:56.480          "traddr": "127.0.0.1",
00:20:56.480          "trsvcid": "8009"
00:20:56.480        }
00:20:56.480      ]
00:20:56.480    }
00:20:56.480  }'
00:20:56.480    19:19:27 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:20:56.480    19:19:27 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:20:56.480    19:19:27 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n AES_CBC ]]
00:20:56.480    19:19:27 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:20:56.480     19:19:27 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher AES_CBC
00:20:56.480     19:19:27 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:20:56.480     19:19:27 sma.sma_crypto -- sma/common.sh@28 -- # echo 0
00:20:56.480    19:19:27 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:20:56.480     19:19:27 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key deadbeefcafebabefeedbeefbabecafe
00:20:56.480     19:19:27 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:20:56.480      19:19:27 sma.sma_crypto -- sma/common.sh@35 -- # echo -n deadbeefcafebabefeedbeefbabecafe
00:20:56.480    19:19:27 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:20:56.480     19:19:27 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:20:56.480    19:19:27 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:20:56.480    "cipher": 0,"key": "ZGVhZGJlZWZjYWZlYmFiZWZlZWRiZWVmYmFiZWNhZmU="
00:20:56.480  }'
00:20:56.480    19:19:27 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:20:56.480    19:19:27 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:20:56.739  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:20:56.739  I0000 00:00:1733509167.541940  607374 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:20:56.739  I0000 00:00:1733509167.543909  607374 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:20:56.739  I0000 00:00:1733509167.545582  607387 subchannel.cc:806] subchannel 0x557f7e309560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x557f7e31ff20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x557f7e2d66e0, grpc.internal.client_channel_call_destination=0x7f3630ea6390, grpc.internal.event_engine=0x557f7e3055b0, grpc.internal.security_connector=0x557f7e305540, grpc.internal.subchannel_pool=0x557f7e359410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x557f7e223a60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:27.545070192+01:00"}), backing off for 999 ms
00:20:56.739  Traceback (most recent call last):
00:20:56.739    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:20:56.739      main(sys.argv[1:])
00:20:56.739    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:20:56.739      result = client.call(request['method'], request.get('params', {}))
00:20:56.739               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:20:56.739    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:20:56.739      response = func(request=json_format.ParseDict(params, input()))
00:20:56.739                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:20:56.739    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:20:56.739      return _end_unary_response_blocking(state, call, False, None)
00:20:56.739             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:20:56.739    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:20:56.739      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:20:56.739      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:20:56.739  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:20:56.739  	status = StatusCode.INVALID_ARGUMENT
00:20:56.739  	details = "Invalid volume crypto configuration: bad key"
00:20:56.739  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Invalid volume crypto configuration: bad key", grpc_status:3, created_time:"2024-12-06T19:19:27.561258851+01:00"}"
00:20:56.739  >
00:20:56.739   19:19:27 sma.sma_crypto -- common/autotest_common.sh@655 -- # es=1
00:20:56.739   19:19:27 sma.sma_crypto -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:56.739   19:19:27 sma.sma_crypto -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:56.739   19:19:27 sma.sma_crypto -- common/autotest_common.sh@679 -- # (( !es == 0 ))
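The `uuid2base64` step in the trace (`sma/common.sh@20` shells out to an inline `python` snippet) produces the `"volume_id"` value by base64-encoding the 16 raw bytes of the UUID. A shell-only sketch of the same transformation — the hex-to-binary step here uses `sed` plus `printf '%b'` in place of the real helper's Python call, so treat it as an illustration, not the actual implementation:

```shell
#!/usr/bin/env bash
# uuid2base64: strip the dashes from the UUID, decode the 32 hex digits
# to their 16 raw bytes, and base64 the result — yielding the
# "volume_id" string seen in the generated config.
uuid2base64() {
    local hex=${1//-/}                                  # 32 hex chars
    local escaped
    escaped="$(sed 's/../\\x&/g' <<< "$hex")"           # \x8f\xfb...
    printf '%b' "$escaped" | base64
}

uuid2base64 8ffb3c5c-7b4d-454d-a610-242a29a94ad2
# j/s8XHtNRU2mECQqKalK0g==
```

The output matches the `"volume_id": "j/s8XHtNRU2mECQqKalK0g=="` field in the trace.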
00:20:56.739   19:19:27 sma.sma_crypto -- sma/crypto.sh@236 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 8ffb3c5c-7b4d-454d-a610-242a29a94ad2 AES_CBC 1234567890abcdef1234567890abcdef deadbeefcafebabefeedbeefbabecafe
00:20:56.739   19:19:27 sma.sma_crypto -- common/autotest_common.sh@652 -- # local es=0
00:20:56.739   19:19:27 sma.sma_crypto -- common/autotest_common.sh@654 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 8ffb3c5c-7b4d-454d-a610-242a29a94ad2 AES_CBC 1234567890abcdef1234567890abcdef deadbeefcafebabefeedbeefbabecafe
00:20:56.739   19:19:27 sma.sma_crypto -- common/autotest_common.sh@640 -- # local arg=attach_volume
00:20:56.739   19:19:27 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:56.739    19:19:27 sma.sma_crypto -- common/autotest_common.sh@644 -- # type -t attach_volume
00:20:56.739   19:19:27 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:56.739   19:19:27 sma.sma_crypto -- common/autotest_common.sh@655 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 8ffb3c5c-7b4d-454d-a610-242a29a94ad2 AES_CBC 1234567890abcdef1234567890abcdef deadbeefcafebabefeedbeefbabecafe
00:20:56.739   19:19:27 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:20:56.739   19:19:27 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:20:56.739   19:19:27 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:56.739    19:19:27 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 8ffb3c5c-7b4d-454d-a610-242a29a94ad2 AES_CBC 1234567890abcdef1234567890abcdef deadbeefcafebabefeedbeefbabecafe
00:20:56.739    19:19:27 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=8ffb3c5c-7b4d-454d-a610-242a29a94ad2 cipher=AES_CBC key=1234567890abcdef1234567890abcdef key2=deadbeefcafebabefeedbeefbabecafe config
00:20:56.740    19:19:27 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:20:56.740     19:19:27 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:20:56.740      19:19:27 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 8ffb3c5c-7b4d-454d-a610-242a29a94ad2
00:20:56.740      19:19:27 sma.sma_crypto -- sma/common.sh@20 -- # python
00:20:56.740    19:19:27 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "j/s8XHtNRU2mECQqKalK0g==",
00:20:56.740  "nvmf": {
00:20:56.740    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:20:56.740    "discovery": {
00:20:56.740      "discovery_endpoints": [
00:20:56.740        {
00:20:56.740          "trtype": "tcp",
00:20:56.740          "traddr": "127.0.0.1",
00:20:56.740          "trsvcid": "8009"
00:20:56.740        }
00:20:56.740      ]
00:20:56.740    }
00:20:56.740  }'
00:20:56.740    19:19:27 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:20:56.740    19:19:27 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:20:56.740    19:19:27 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n AES_CBC ]]
00:20:56.740    19:19:27 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:20:56.740     19:19:27 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher AES_CBC
00:20:56.740     19:19:27 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:20:56.740     19:19:27 sma.sma_crypto -- sma/common.sh@28 -- # echo 0
00:20:56.740    19:19:27 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:20:56.740     19:19:27 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:20:56.740     19:19:27 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:20:56.740      19:19:27 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:20:56.740    19:19:27 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n deadbeefcafebabefeedbeefbabecafe ]]
00:20:56.740    19:19:27 sma.sma_crypto -- sma/crypto.sh@55 -- # crypto+=("\"key2\": \"$(format_key $key2)\"")
00:20:56.740     19:19:27 sma.sma_crypto -- sma/crypto.sh@55 -- # format_key deadbeefcafebabefeedbeefbabecafe
00:20:56.740     19:19:27 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:20:56.740      19:19:27 sma.sma_crypto -- sma/common.sh@35 -- # echo -n deadbeefcafebabefeedbeefbabecafe
00:20:56.740     19:19:27 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:20:56.740    19:19:27 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:20:56.740    "cipher": 0,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY=","key2": "ZGVhZGJlZWZjYWZlYmFiZWZlZWRiZWVmYmFiZWNhZmU="
00:20:56.740  }'
00:20:56.740    19:19:27 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:20:56.740    19:19:27 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:20:56.998  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:20:56.998  I0000 00:00:1733509167.881595  607410 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:20:56.998  I0000 00:00:1733509167.883487  607410 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:20:56.998  I0000 00:00:1733509167.885242  607451 subchannel.cc:806] subchannel 0x55a5e3f1d560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55a5e3f33f20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55a5e3eea6e0, grpc.internal.client_channel_call_destination=0x7fd2d8d11390, grpc.internal.event_engine=0x55a5e3f2be90, grpc.internal.security_connector=0x55a5e3e53f80, grpc.internal.subchannel_pool=0x55a5e3f2be40, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55a5e3ee5840, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:27.884728419+01:00"}), backing off for 1000 ms
00:20:56.998  Traceback (most recent call last):
00:20:56.998    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:20:56.998      main(sys.argv[1:])
00:20:56.998    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:20:56.998      result = client.call(request['method'], request.get('params', {}))
00:20:56.998               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:20:56.998    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:20:56.998      response = func(request=json_format.ParseDict(params, input()))
00:20:56.998                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:20:56.998    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:20:56.998      return _end_unary_response_blocking(state, call, False, None)
00:20:56.998             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:20:56.998    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:20:56.998      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:20:56.998      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:20:56.998  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:20:56.998  	status = StatusCode.INVALID_ARGUMENT
00:20:56.998  	details = "Invalid volume crypto configuration: bad key2"
00:20:56.998  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-12-06T19:19:27.902378918+01:00", grpc_status:3, grpc_message:"Invalid volume crypto configuration: bad key2"}"
00:20:56.998  >
00:20:56.998   19:19:27 sma.sma_crypto -- common/autotest_common.sh@655 -- # es=1
00:20:56.998   19:19:27 sma.sma_crypto -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:56.998   19:19:27 sma.sma_crypto -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:56.998   19:19:27 sma.sma_crypto -- common/autotest_common.sh@679 -- # (( !es == 0 ))
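The `crypto_config` blocks above are built by accumulating fields in a bash array and letting `local IFS=,` comma-join them when the array is expanded as `"${crypto[*]}"` — which is why the generated JSON has no space after the commas (e.g. `"cipher": 0,"key": ...`). A minimal sketch of that joining step, with placeholder field values rather than the real keys:

```shell
#!/usr/bin/env bash
# Comma-join crypto fields the way the trace's gen_volume_params does:
# set IFS=, in function scope and expand the array with [*], which
# joins elements using the first character of IFS.
join_crypto() {
    local -a crypto=("$@")
    local IFS=,
    printf '"crypto": {\n  %s\n}\n' "${crypto[*]}"
}

join_crypto '"cipher": 0' '"key": "K1"' '"key2": "K2"'
```

This prints `"crypto": {` then `  "cipher": 0,"key": "K1","key2": "K2"` then `}`, mirroring the unspaced comma layout in the logged `crypto_config`.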
00:20:56.998   19:19:27 sma.sma_crypto -- sma/crypto.sh@238 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 8ffb3c5c-7b4d-454d-a610-242a29a94ad2 8 1234567890abcdef1234567890abcdef
00:20:56.998   19:19:27 sma.sma_crypto -- common/autotest_common.sh@652 -- # local es=0
00:20:56.998   19:19:27 sma.sma_crypto -- common/autotest_common.sh@654 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 8ffb3c5c-7b4d-454d-a610-242a29a94ad2 8 1234567890abcdef1234567890abcdef
00:20:56.998   19:19:27 sma.sma_crypto -- common/autotest_common.sh@640 -- # local arg=attach_volume
00:20:56.998   19:19:27 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:56.998    19:19:27 sma.sma_crypto -- common/autotest_common.sh@644 -- # type -t attach_volume
00:20:56.999   19:19:27 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:56.999   19:19:27 sma.sma_crypto -- common/autotest_common.sh@655 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 8ffb3c5c-7b4d-454d-a610-242a29a94ad2 8 1234567890abcdef1234567890abcdef
00:20:56.999   19:19:27 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:20:56.999   19:19:27 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:20:56.999   19:19:27 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:56.999    19:19:27 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 8ffb3c5c-7b4d-454d-a610-242a29a94ad2 8 1234567890abcdef1234567890abcdef
00:20:56.999    19:19:27 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=8ffb3c5c-7b4d-454d-a610-242a29a94ad2 cipher=8 key=1234567890abcdef1234567890abcdef key2= config
00:20:56.999    19:19:27 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:20:56.999     19:19:27 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:20:56.999      19:19:27 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 8ffb3c5c-7b4d-454d-a610-242a29a94ad2
00:20:56.999      19:19:27 sma.sma_crypto -- sma/common.sh@20 -- # python
00:20:57.256    19:19:27 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "j/s8XHtNRU2mECQqKalK0g==",
00:20:57.256  "nvmf": {
00:20:57.256    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:20:57.256    "discovery": {
00:20:57.256      "discovery_endpoints": [
00:20:57.256        {
00:20:57.256          "trtype": "tcp",
00:20:57.256          "traddr": "127.0.0.1",
00:20:57.256          "trsvcid": "8009"
00:20:57.256        }
00:20:57.256      ]
00:20:57.256    }
00:20:57.256  }'
00:20:57.256    19:19:27 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:20:57.256    19:19:27 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:20:57.256    19:19:27 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n 8 ]]
00:20:57.256    19:19:27 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:20:57.256     19:19:27 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher 8
00:20:57.256     19:19:27 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:20:57.256     19:19:27 sma.sma_crypto -- sma/common.sh@30 -- # echo 8
00:20:57.256    19:19:27 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:20:57.256     19:19:27 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:20:57.256     19:19:27 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:20:57.256      19:19:27 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:20:57.256    19:19:27 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:20:57.256     19:19:27 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:20:57.256    19:19:27 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:20:57.256    "cipher": 8,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY="
00:20:57.256  }'
00:20:57.256    19:19:27 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:20:57.256    19:19:27 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:20:57.516  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:20:57.516  I0000 00:00:1733509168.218821  607525 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:20:57.516  I0000 00:00:1733509168.220553  607525 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:20:57.516  I0000 00:00:1733509168.222303  607588 subchannel.cc:806] subchannel 0x55863af38560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55863af4ef20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55863af056e0, grpc.internal.client_channel_call_destination=0x7f93cb5b3390, grpc.internal.event_engine=0x55863af345b0, grpc.internal.security_connector=0x55863af34540, grpc.internal.subchannel_pool=0x55863af88410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55863ae52a60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:28.221768762+01:00"}), backing off for 1000 ms
00:20:57.516  Traceback (most recent call last):
00:20:57.516    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:20:57.516      main(sys.argv[1:])
00:20:57.516    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:20:57.516      result = client.call(request['method'], request.get('params', {}))
00:20:57.516               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:20:57.516    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:20:57.516      response = func(request=json_format.ParseDict(params, input()))
00:20:57.516                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:20:57.516    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:20:57.516      return _end_unary_response_blocking(state, call, False, None)
00:20:57.516             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:20:57.516    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:20:57.516      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:20:57.516      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:20:57.516  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:20:57.516  	status = StatusCode.INVALID_ARGUMENT
00:20:57.516  	details = "Invalid volume crypto configuration: bad cipher"
00:20:57.516  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-12-06T19:19:28.23962411+01:00", grpc_status:3, grpc_message:"Invalid volume crypto configuration: bad cipher"}"
00:20:57.516  >
00:20:57.516   19:19:28 sma.sma_crypto -- common/autotest_common.sh@655 -- # es=1
00:20:57.516   19:19:28 sma.sma_crypto -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:57.516   19:19:28 sma.sma_crypto -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:57.516   19:19:28 sma.sma_crypto -- common/autotest_common.sh@679 -- # (( !es == 0 ))
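The three `get_cipher` calls visible in the trace (`sma/common.sh@28`–`@30`) reveal the cipher mapping: `AES_CBC` becomes `0`, `AES_XTS` becomes `1`, and the unrecognized value `8` is echoed back unchanged, so the SMA server is the one that rejects it with `INVALID_ARGUMENT: bad cipher`. A sketch inferred from those case branches (the real `sma/common.sh` may cover additional ciphers):

```shell
#!/usr/bin/env bash
# get_cipher as inferred from the sma/common.sh case statement in the
# trace: known cipher names map to numeric IDs; anything else is passed
# through so validation happens server-side.
get_cipher() {
    case "$1" in
        AES_CBC) echo 0 ;;
        AES_XTS) echo 1 ;;
        *) echo "$1" ;;
    esac
}

get_cipher AES_CBC   # 0
get_cipher AES_XTS   # 1
get_cipher 8         # 8 (invalid; the server rejects it)
```

Passing the literal `8` through is what makes the `NOT attach_volume ... 8 ...` negative test above exercise the server-side "bad cipher" path.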
00:20:57.516   19:19:28 sma.sma_crypto -- sma/crypto.sh@241 -- # verify_crypto_volume nqn.2016-06.io.spdk:cnode0 8ffb3c5c-7b4d-454d-a610-242a29a94ad2
00:20:57.516   19:19:28 sma.sma_crypto -- sma/crypto.sh@132 -- # local nqn=nqn.2016-06.io.spdk:cnode0 uuid=8ffb3c5c-7b4d-454d-a610-242a29a94ad2 ns ns_bdev
00:20:57.516    19:19:28 sma.sma_crypto -- sma/crypto.sh@134 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:20:57.516    19:19:28 sma.sma_crypto -- sma/crypto.sh@134 -- # jq -r '.[0].namespaces[0]'
00:20:57.516    19:19:28 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:57.516    19:19:28 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:20:57.516    19:19:28 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:57.516   19:19:28 sma.sma_crypto -- sma/crypto.sh@134 -- # ns='{
00:20:57.516    "nsid": 1,
00:20:57.516    "bdev_name": "62a292b5-5519-4502-8fda-12735ba8c6f8",
00:20:57.516    "name": "62a292b5-5519-4502-8fda-12735ba8c6f8",
00:20:57.516    "nguid": "8FFB3C5C7B4D454DA610242A29A94AD2",
00:20:57.516    "uuid": "8ffb3c5c-7b4d-454d-a610-242a29a94ad2"
00:20:57.516  }'
00:20:57.516    19:19:28 sma.sma_crypto -- sma/crypto.sh@135 -- # jq -r .name
00:20:57.516   19:19:28 sma.sma_crypto -- sma/crypto.sh@135 -- # ns_bdev=62a292b5-5519-4502-8fda-12735ba8c6f8
00:20:57.516    19:19:28 sma.sma_crypto -- sma/crypto.sh@138 -- # rpc_cmd bdev_get_bdevs -b 62a292b5-5519-4502-8fda-12735ba8c6f8
00:20:57.516    19:19:28 sma.sma_crypto -- sma/crypto.sh@138 -- # jq -r '.[0].product_name'
00:20:57.516    19:19:28 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:57.516    19:19:28 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:20:57.516    19:19:28 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:57.516   19:19:28 sma.sma_crypto -- sma/crypto.sh@138 -- # [[ crypto == crypto ]]
00:20:57.516    19:19:28 sma.sma_crypto -- sma/crypto.sh@139 -- # rpc_cmd bdev_get_bdevs
00:20:57.516    19:19:28 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:57.516    19:19:28 sma.sma_crypto -- sma/crypto.sh@139 -- # jq -r '[.[] | select(.product_name == "crypto")] | length'
00:20:57.516    19:19:28 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:20:57.516    19:19:28 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:57.516   19:19:28 sma.sma_crypto -- sma/crypto.sh@139 -- # [[ 1 -eq 1 ]]
00:20:57.516    19:19:28 sma.sma_crypto -- sma/crypto.sh@141 -- # jq -r .uuid
00:20:57.516   19:19:28 sma.sma_crypto -- sma/crypto.sh@141 -- # [[ 8ffb3c5c-7b4d-454d-a610-242a29a94ad2 == \8\f\f\b\3\c\5\c\-\7\b\4\d\-\4\5\4\d\-\a\6\1\0\-\2\4\2\a\2\9\a\9\4\a\d\2 ]]
00:20:57.516    19:19:28 sma.sma_crypto -- sma/crypto.sh@142 -- # jq -r .nguid
00:20:57.775    19:19:28 sma.sma_crypto -- sma/crypto.sh@142 -- # uuid2nguid 8ffb3c5c-7b4d-454d-a610-242a29a94ad2
00:20:57.775    19:19:28 sma.sma_crypto -- sma/common.sh@40 -- # local uuid=8FFB3C5C-7B4D-454D-A610-242A29A94AD2
00:20:57.775    19:19:28 sma.sma_crypto -- sma/common.sh@41 -- # echo 8FFB3C5C7B4D454DA610242A29A94AD2
00:20:57.775   19:19:28 sma.sma_crypto -- sma/crypto.sh@142 -- # [[ 8FFB3C5C7B4D454DA610242A29A94AD2 == \8\F\F\B\3\C\5\C\7\B\4\D\4\5\4\D\A\6\1\0\2\4\2\A\2\9\A\9\4\A\D\2 ]]
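The NGUID check just above relies on `uuid2nguid` (`sma/common.sh@40`–`@41`), which per the trace uppercases the UUID and strips its dashes to produce the 32-hex-digit NGUID. A minimal sketch of that helper, assuming bash 4+ for the `${var^^}` case conversion:

```shell
#!/usr/bin/env bash
# uuid2nguid per the sma/common.sh trace: uppercase the UUID and drop
# the dashes to obtain the NGUID reported by nvmf_get_subsystems.
uuid2nguid() {
    local uuid=${1^^}        # uppercase (bash 4+)
    echo "${uuid//-/}"       # strip dashes
}

uuid2nguid 8ffb3c5c-7b4d-454d-a610-242a29a94ad2
# 8FFB3C5C7B4D454DA610242A29A94AD2
```

The result matches the `"nguid"` field of the namespace returned by `nvmf_get_subsystems` earlier in the log.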
00:20:57.775   19:19:28 sma.sma_crypto -- sma/crypto.sh@243 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 8ffb3c5c-7b4d-454d-a610-242a29a94ad2
00:20:57.775   19:19:28 sma.sma_crypto -- sma/crypto.sh@120 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:57.775    19:19:28 sma.sma_crypto -- sma/crypto.sh@120 -- # uuid2base64 8ffb3c5c-7b4d-454d-a610-242a29a94ad2
00:20:57.775    19:19:28 sma.sma_crypto -- sma/common.sh@20 -- # python
00:20:58.034  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:20:58.035  I0000 00:00:1733509168.761352  607624 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:20:58.035  I0000 00:00:1733509168.763251  607624 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:20:58.035  I0000 00:00:1733509168.764801  607639 subchannel.cc:806] subchannel 0x5647abb46560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5647abb5cf20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5647abb136e0, grpc.internal.client_channel_call_destination=0x7f82fb498390, grpc.internal.event_engine=0x5647abb425b0, grpc.internal.security_connector=0x5647abac6fb0, grpc.internal.subchannel_pool=0x5647abb96410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5647aba60a60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:28.7643063+01:00"}), backing off for 999 ms
00:20:58.035  {}
00:20:58.035   19:19:28 sma.sma_crypto -- sma/crypto.sh@247 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 8ffb3c5c-7b4d-454d-a610-242a29a94ad2 8 1234567890abcdef1234567890abcdef
00:20:58.035   19:19:28 sma.sma_crypto -- common/autotest_common.sh@652 -- # local es=0
00:20:58.035   19:19:28 sma.sma_crypto -- common/autotest_common.sh@654 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 8ffb3c5c-7b4d-454d-a610-242a29a94ad2 8 1234567890abcdef1234567890abcdef
00:20:58.035   19:19:28 sma.sma_crypto -- common/autotest_common.sh@640 -- # local arg=attach_volume
00:20:58.035   19:19:28 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:58.035    19:19:28 sma.sma_crypto -- common/autotest_common.sh@644 -- # type -t attach_volume
00:20:58.035   19:19:28 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:58.035   19:19:28 sma.sma_crypto -- common/autotest_common.sh@655 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 8ffb3c5c-7b4d-454d-a610-242a29a94ad2 8 1234567890abcdef1234567890abcdef
00:20:58.035   19:19:28 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:20:58.035   19:19:28 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:20:58.035   19:19:28 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:58.035    19:19:28 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 8ffb3c5c-7b4d-454d-a610-242a29a94ad2 8 1234567890abcdef1234567890abcdef
00:20:58.035    19:19:28 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=8ffb3c5c-7b4d-454d-a610-242a29a94ad2 cipher=8 key=1234567890abcdef1234567890abcdef key2= config
00:20:58.035    19:19:28 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:20:58.035     19:19:28 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:20:58.035      19:19:28 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 8ffb3c5c-7b4d-454d-a610-242a29a94ad2
00:20:58.035      19:19:28 sma.sma_crypto -- sma/common.sh@20 -- # python
00:20:58.035    19:19:28 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "j/s8XHtNRU2mECQqKalK0g==",
00:20:58.035  "nvmf": {
00:20:58.035    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:20:58.035    "discovery": {
00:20:58.035      "discovery_endpoints": [
00:20:58.035        {
00:20:58.035          "trtype": "tcp",
00:20:58.035          "traddr": "127.0.0.1",
00:20:58.035          "trsvcid": "8009"
00:20:58.035        }
00:20:58.035      ]
00:20:58.035    }
00:20:58.035  }'
00:20:58.035    19:19:28 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:20:58.035    19:19:28 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:20:58.035    19:19:28 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n 8 ]]
00:20:58.035    19:19:28 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:20:58.035     19:19:28 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher 8
00:20:58.035     19:19:28 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:20:58.035     19:19:28 sma.sma_crypto -- sma/common.sh@30 -- # echo 8
00:20:58.035    19:19:28 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:20:58.035     19:19:28 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:20:58.035     19:19:28 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:20:58.035      19:19:28 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:20:58.035    19:19:28 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:20:58.035     19:19:28 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:20:58.035    19:19:28 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:20:58.035    "cipher": 8,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY="
00:20:58.035  }'
00:20:58.035    19:19:28 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:20:58.035    19:19:28 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:20:58.296  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:20:58.296  I0000 00:00:1733509169.159230  607662 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:20:58.296  I0000 00:00:1733509169.161118  607662 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:20:58.296  I0000 00:00:1733509169.162755  607794 subchannel.cc:806] subchannel 0x557562fc1560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x557562fd7f20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x557562f8e6e0, grpc.internal.client_channel_call_destination=0x7f3884033390, grpc.internal.event_engine=0x557562fbd5b0, grpc.internal.security_connector=0x557562fbd540, grpc.internal.subchannel_pool=0x557563011410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x557562edba60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:29.162284033+01:00"}), backing off for 999 ms
00:20:59.679  Traceback (most recent call last):
00:20:59.679    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:20:59.679      main(sys.argv[1:])
00:20:59.679    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:20:59.679      result = client.call(request['method'], request.get('params', {}))
00:20:59.679               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:20:59.679    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:20:59.679      response = func(request=json_format.ParseDict(params, input()))
00:20:59.679                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:20:59.679    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:20:59.679      return _end_unary_response_blocking(state, call, False, None)
00:20:59.679             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:20:59.679    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:20:59.679      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:20:59.679      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:20:59.679  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:20:59.679  	status = StatusCode.INVALID_ARGUMENT
00:20:59.679  	details = "Invalid volume crypto configuration: bad cipher"
00:20:59.679  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Invalid volume crypto configuration: bad cipher", grpc_status:3, created_time:"2024-12-06T19:19:30.289783053+01:00"}"
00:20:59.679  >
00:20:59.679   19:19:30 sma.sma_crypto -- common/autotest_common.sh@655 -- # es=1
00:20:59.679   19:19:30 sma.sma_crypto -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:59.679   19:19:30 sma.sma_crypto -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:59.679   19:19:30 sma.sma_crypto -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:20:59.679    19:19:30 sma.sma_crypto -- sma/crypto.sh@248 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:20:59.679    19:19:30 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:59.679    19:19:30 sma.sma_crypto -- sma/crypto.sh@248 -- # jq -r '.[0].namespaces | length'
00:20:59.679    19:19:30 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:20:59.679    19:19:30 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:59.679   19:19:30 sma.sma_crypto -- sma/crypto.sh@248 -- # [[ 0 -eq 0 ]]
00:20:59.679    19:19:30 sma.sma_crypto -- sma/crypto.sh@249 -- # rpc_cmd bdev_nvme_get_discovery_info
00:20:59.679    19:19:30 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:59.679    19:19:30 sma.sma_crypto -- sma/crypto.sh@249 -- # jq -r '. | length'
00:20:59.679    19:19:30 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:20:59.679    19:19:30 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:59.679   19:19:30 sma.sma_crypto -- sma/crypto.sh@249 -- # [[ 0 -eq 0 ]]
00:20:59.679    19:19:30 sma.sma_crypto -- sma/crypto.sh@250 -- # rpc_cmd bdev_get_bdevs
00:20:59.679    19:19:30 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:59.679    19:19:30 sma.sma_crypto -- sma/crypto.sh@250 -- # jq -r length
00:20:59.679    19:19:30 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:20:59.679    19:19:30 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:59.679   19:19:30 sma.sma_crypto -- sma/crypto.sh@250 -- # [[ 0 -eq 0 ]]
00:20:59.679   19:19:30 sma.sma_crypto -- sma/crypto.sh@252 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:20:59.679   19:19:30 sma.sma_crypto -- sma/crypto.sh@94 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:59.937  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:20:59.937  I0000 00:00:1733509170.660509  607960 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:20:59.937  I0000 00:00:1733509170.662410  607960 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:20:59.937  I0000 00:00:1733509170.663990  607961 subchannel.cc:806] subchannel 0x555ec274d560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x555ec2763f20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x555ec271a6e0, grpc.internal.client_channel_call_destination=0x7f0d1d9ec390, grpc.internal.event_engine=0x555ec27495b0, grpc.internal.security_connector=0x555ec268dd60, grpc.internal.subchannel_pool=0x555ec279d410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x555ec2667a60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:30.663403534+01:00"}), backing off for 1000 ms
00:20:59.937  {}
00:20:59.937    19:19:30 sma.sma_crypto -- sma/crypto.sh@255 -- # create_device 8ffb3c5c-7b4d-454d-a610-242a29a94ad2 AES_CBC 1234567890abcdef1234567890abcdef
00:20:59.937    19:19:30 sma.sma_crypto -- sma/crypto.sh@255 -- # jq -r .handle
00:20:59.937    19:19:30 sma.sma_crypto -- sma/crypto.sh@77 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:20:59.937     19:19:30 sma.sma_crypto -- sma/crypto.sh@77 -- # gen_volume_params 8ffb3c5c-7b4d-454d-a610-242a29a94ad2 AES_CBC 1234567890abcdef1234567890abcdef
00:20:59.937     19:19:30 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=8ffb3c5c-7b4d-454d-a610-242a29a94ad2 cipher=AES_CBC key=1234567890abcdef1234567890abcdef key2= config
00:20:59.937     19:19:30 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:20:59.937      19:19:30 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:20:59.937       19:19:30 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 8ffb3c5c-7b4d-454d-a610-242a29a94ad2
00:20:59.937       19:19:30 sma.sma_crypto -- sma/common.sh@20 -- # python
00:20:59.937     19:19:30 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "j/s8XHtNRU2mECQqKalK0g==",
00:20:59.937  "nvmf": {
00:20:59.937    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:20:59.937    "discovery": {
00:20:59.937      "discovery_endpoints": [
00:20:59.937        {
00:20:59.937          "trtype": "tcp",
00:20:59.937          "traddr": "127.0.0.1",
00:20:59.937          "trsvcid": "8009"
00:20:59.937        }
00:20:59.937      ]
00:20:59.937    }
00:20:59.937  }'
00:20:59.937     19:19:30 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:20:59.937     19:19:30 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:20:59.937     19:19:30 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n AES_CBC ]]
00:20:59.937     19:19:30 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:20:59.937      19:19:30 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher AES_CBC
00:20:59.937      19:19:30 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:20:59.937      19:19:30 sma.sma_crypto -- sma/common.sh@28 -- # echo 0
00:20:59.937     19:19:30 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:20:59.937      19:19:30 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:20:59.937      19:19:30 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/63
00:20:59.937       19:19:30 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:20:59.937     19:19:30 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:20:59.937      19:19:30 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:20:59.937     19:19:30 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:20:59.937    "cipher": 0,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY="
00:20:59.937  }'
00:20:59.937     19:19:30 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:20:59.937     19:19:30 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:21:00.195  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:21:00.195  I0000 00:00:1733509170.985580  607984 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:21:00.195  I0000 00:00:1733509170.987307  607984 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:21:00.195  I0000 00:00:1733509170.988988  608004 subchannel.cc:806] subchannel 0x5614deda4560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5614dedbaf20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5614ded716e0, grpc.internal.client_channel_call_destination=0x7ff745568390, grpc.internal.event_engine=0x5614dedc0020, grpc.internal.security_connector=0x5614decf1610, grpc.internal.subchannel_pool=0x5614dedb2e40, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5614ded16330, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:30.988512915+01:00"}), backing off for 1000 ms
00:21:01.637  [2024-12-06 19:19:32.122793] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:21:01.637   19:19:32 sma.sma_crypto -- sma/crypto.sh@255 -- # device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:21:01.637   19:19:32 sma.sma_crypto -- sma/crypto.sh@256 -- # verify_crypto_volume nqn.2016-06.io.spdk:cnode0 8ffb3c5c-7b4d-454d-a610-242a29a94ad2
00:21:01.637   19:19:32 sma.sma_crypto -- sma/crypto.sh@132 -- # local nqn=nqn.2016-06.io.spdk:cnode0 uuid=8ffb3c5c-7b4d-454d-a610-242a29a94ad2 ns ns_bdev
00:21:01.637    19:19:32 sma.sma_crypto -- sma/crypto.sh@134 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:21:01.637    19:19:32 sma.sma_crypto -- sma/crypto.sh@134 -- # jq -r '.[0].namespaces[0]'
00:21:01.637    19:19:32 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:01.637    19:19:32 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:21:01.637    19:19:32 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:01.637   19:19:32 sma.sma_crypto -- sma/crypto.sh@134 -- # ns='{
00:21:01.637    "nsid": 1,
00:21:01.637    "bdev_name": "0858dfa4-f324-4e13-a4d7-2520cb41e8a0",
00:21:01.637    "name": "0858dfa4-f324-4e13-a4d7-2520cb41e8a0",
00:21:01.637    "nguid": "8FFB3C5C7B4D454DA610242A29A94AD2",
00:21:01.637    "uuid": "8ffb3c5c-7b4d-454d-a610-242a29a94ad2"
00:21:01.637  }'
00:21:01.637    19:19:32 sma.sma_crypto -- sma/crypto.sh@135 -- # jq -r .name
00:21:01.637   19:19:32 sma.sma_crypto -- sma/crypto.sh@135 -- # ns_bdev=0858dfa4-f324-4e13-a4d7-2520cb41e8a0
00:21:01.637    19:19:32 sma.sma_crypto -- sma/crypto.sh@138 -- # rpc_cmd bdev_get_bdevs -b 0858dfa4-f324-4e13-a4d7-2520cb41e8a0
00:21:01.637    19:19:32 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:01.637    19:19:32 sma.sma_crypto -- sma/crypto.sh@138 -- # jq -r '.[0].product_name'
00:21:01.637    19:19:32 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:21:01.637    19:19:32 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:01.637   19:19:32 sma.sma_crypto -- sma/crypto.sh@138 -- # [[ crypto == crypto ]]
00:21:01.637    19:19:32 sma.sma_crypto -- sma/crypto.sh@139 -- # rpc_cmd bdev_get_bdevs
00:21:01.637    19:19:32 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:01.637    19:19:32 sma.sma_crypto -- sma/crypto.sh@139 -- # jq -r '[.[] | select(.product_name == "crypto")] | length'
00:21:01.637    19:19:32 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:21:01.637    19:19:32 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:01.637   19:19:32 sma.sma_crypto -- sma/crypto.sh@139 -- # [[ 1 -eq 1 ]]
00:21:01.637    19:19:32 sma.sma_crypto -- sma/crypto.sh@141 -- # jq -r .uuid
00:21:01.637   19:19:32 sma.sma_crypto -- sma/crypto.sh@141 -- # [[ 8ffb3c5c-7b4d-454d-a610-242a29a94ad2 == \8\f\f\b\3\c\5\c\-\7\b\4\d\-\4\5\4\d\-\a\6\1\0\-\2\4\2\a\2\9\a\9\4\a\d\2 ]]
00:21:01.637    19:19:32 sma.sma_crypto -- sma/crypto.sh@142 -- # jq -r .nguid
00:21:01.637    19:19:32 sma.sma_crypto -- sma/crypto.sh@142 -- # uuid2nguid 8ffb3c5c-7b4d-454d-a610-242a29a94ad2
00:21:01.637    19:19:32 sma.sma_crypto -- sma/common.sh@40 -- # local uuid=8FFB3C5C-7B4D-454D-A610-242A29A94AD2
00:21:01.637    19:19:32 sma.sma_crypto -- sma/common.sh@41 -- # echo 8FFB3C5C7B4D454DA610242A29A94AD2
00:21:01.637   19:19:32 sma.sma_crypto -- sma/crypto.sh@142 -- # [[ 8FFB3C5C7B4D454DA610242A29A94AD2 == \8\F\F\B\3\C\5\C\7\B\4\D\4\5\4\D\A\6\1\0\2\4\2\A\2\9\A\9\4\A\D\2 ]]
00:21:01.637   19:19:32 sma.sma_crypto -- sma/crypto.sh@258 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 8ffb3c5c-7b4d-454d-a610-242a29a94ad2
00:21:01.637   19:19:32 sma.sma_crypto -- sma/crypto.sh@120 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:01.637    19:19:32 sma.sma_crypto -- sma/crypto.sh@120 -- # uuid2base64 8ffb3c5c-7b4d-454d-a610-242a29a94ad2
00:21:01.637    19:19:32 sma.sma_crypto -- sma/common.sh@20 -- # python
00:21:01.896  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:21:01.896  I0000 00:00:1733509172.688478  608187 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:21:01.896  I0000 00:00:1733509172.690271  608187 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:21:01.896  I0000 00:00:1733509172.691724  608307 subchannel.cc:806] subchannel 0x5586baf29560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5586baf3ff20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5586baef66e0, grpc.internal.client_channel_call_destination=0x7f8908028390, grpc.internal.event_engine=0x5586baf255b0, grpc.internal.security_connector=0x5586baea9fb0, grpc.internal.subchannel_pool=0x5586baf79410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5586bae43a60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:32.691242366+01:00"}), backing off for 999 ms
00:21:01.896  {}
00:21:01.896   19:19:32 sma.sma_crypto -- sma/crypto.sh@259 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:21:01.896   19:19:32 sma.sma_crypto -- sma/crypto.sh@94 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:02.154  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:21:02.154  I0000 00:00:1733509173.072578  608333 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:21:02.154  I0000 00:00:1733509173.074513  608333 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:21:02.154  I0000 00:00:1733509173.076149  608335 subchannel.cc:806] subchannel 0x55d072d51560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55d072d67f20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55d072d1e6e0, grpc.internal.client_channel_call_destination=0x7f4c1115b390, grpc.internal.event_engine=0x55d072d4d5b0, grpc.internal.security_connector=0x55d072c91d60, grpc.internal.subchannel_pool=0x55d072da1410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55d072c6ba60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:33.075591039+01:00"}), backing off for 1000 ms
00:21:02.154  {}
00:21:02.413   19:19:33 sma.sma_crypto -- sma/crypto.sh@263 -- # NOT create_device 8ffb3c5c-7b4d-454d-a610-242a29a94ad2 8 1234567890abcdef1234567890abcdef
00:21:02.413   19:19:33 sma.sma_crypto -- common/autotest_common.sh@652 -- # local es=0
00:21:02.413   19:19:33 sma.sma_crypto -- common/autotest_common.sh@654 -- # valid_exec_arg create_device 8ffb3c5c-7b4d-454d-a610-242a29a94ad2 8 1234567890abcdef1234567890abcdef
00:21:02.413   19:19:33 sma.sma_crypto -- common/autotest_common.sh@640 -- # local arg=create_device
00:21:02.413   19:19:33 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:02.413    19:19:33 sma.sma_crypto -- common/autotest_common.sh@644 -- # type -t create_device
00:21:02.413   19:19:33 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:02.413   19:19:33 sma.sma_crypto -- common/autotest_common.sh@655 -- # create_device 8ffb3c5c-7b4d-454d-a610-242a29a94ad2 8 1234567890abcdef1234567890abcdef
00:21:02.413   19:19:33 sma.sma_crypto -- sma/crypto.sh@77 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:02.413    19:19:33 sma.sma_crypto -- sma/crypto.sh@77 -- # gen_volume_params 8ffb3c5c-7b4d-454d-a610-242a29a94ad2 8 1234567890abcdef1234567890abcdef
00:21:02.413    19:19:33 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=8ffb3c5c-7b4d-454d-a610-242a29a94ad2 cipher=8 key=1234567890abcdef1234567890abcdef key2= config
00:21:02.413    19:19:33 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:21:02.413     19:19:33 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:21:02.413      19:19:33 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 8ffb3c5c-7b4d-454d-a610-242a29a94ad2
00:21:02.413      19:19:33 sma.sma_crypto -- sma/common.sh@20 -- # python
00:21:02.413    19:19:33 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "j/s8XHtNRU2mECQqKalK0g==",
00:21:02.413  "nvmf": {
00:21:02.413    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:21:02.413    "discovery": {
00:21:02.413      "discovery_endpoints": [
00:21:02.413        {
00:21:02.413          "trtype": "tcp",
00:21:02.413          "traddr": "127.0.0.1",
00:21:02.413          "trsvcid": "8009"
00:21:02.413        }
00:21:02.413      ]
00:21:02.413    }
00:21:02.413  }'
00:21:02.413    19:19:33 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:21:02.413    19:19:33 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:21:02.413    19:19:33 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n 8 ]]
00:21:02.413    19:19:33 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:21:02.413     19:19:33 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher 8
00:21:02.413     19:19:33 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:21:02.413     19:19:33 sma.sma_crypto -- sma/common.sh@30 -- # echo 8
00:21:02.413    19:19:33 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:21:02.413     19:19:33 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:21:02.413     19:19:33 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:21:02.413      19:19:33 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:21:02.413    19:19:33 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:21:02.413     19:19:33 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:21:02.413    19:19:33 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:21:02.413    "cipher": 8,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY="
00:21:02.413  }'
00:21:02.413    19:19:33 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:21:02.413    19:19:33 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:21:02.672  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:21:02.672  I0000 00:00:1733509173.417285  608357 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:21:02.672  I0000 00:00:1733509173.419271  608357 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:21:02.672  I0000 00:00:1733509173.421059  608380 subchannel.cc:806] subchannel 0x561645c51560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x561645c67f20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x561645c1e6e0, grpc.internal.client_channel_call_destination=0x7fd02e766390, grpc.internal.event_engine=0x561645c6d020, grpc.internal.security_connector=0x561645b9e610, grpc.internal.subchannel_pool=0x561645c5fe40, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x561645bc3330, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:33.420550695+01:00"}), backing off for 1000 ms
00:21:03.611  Traceback (most recent call last):
00:21:03.611    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:21:03.611      main(sys.argv[1:])
00:21:03.611    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:21:03.611      result = client.call(request['method'], request.get('params', {}))
00:21:03.611               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:21:03.611    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:21:03.611      response = func(request=json_format.ParseDict(params, input()))
00:21:03.611                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:21:03.611    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:21:03.611      return _end_unary_response_blocking(state, call, False, None)
00:21:03.611             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:21:03.611    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:21:03.611      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:21:03.611      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:21:03.611  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:21:03.611  	status = StatusCode.INVALID_ARGUMENT
00:21:03.611  	details = "Invalid volume crypto configuration: bad cipher"
00:21:03.611  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-12-06T19:19:34.550836001+01:00", grpc_status:3, grpc_message:"Invalid volume crypto configuration: bad cipher"}"
00:21:03.611  >
00:21:03.871   19:19:34 sma.sma_crypto -- common/autotest_common.sh@655 -- # es=1
00:21:03.871   19:19:34 sma.sma_crypto -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:21:03.871   19:19:34 sma.sma_crypto -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:21:03.871   19:19:34 sma.sma_crypto -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:21:03.871    19:19:34 sma.sma_crypto -- sma/crypto.sh@264 -- # rpc_cmd bdev_nvme_get_discovery_info
00:21:03.871    19:19:34 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:03.871    19:19:34 sma.sma_crypto -- sma/crypto.sh@264 -- # jq -r '. | length'
00:21:03.871    19:19:34 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:21:03.871    19:19:34 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:03.871   19:19:34 sma.sma_crypto -- sma/crypto.sh@264 -- # [[ 0 -eq 0 ]]
00:21:03.871    19:19:34 sma.sma_crypto -- sma/crypto.sh@265 -- # rpc_cmd bdev_get_bdevs
00:21:03.871    19:19:34 sma.sma_crypto -- sma/crypto.sh@265 -- # jq -r length
00:21:03.871    19:19:34 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:03.871    19:19:34 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:21:03.871    19:19:34 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:03.871   19:19:34 sma.sma_crypto -- sma/crypto.sh@265 -- # [[ 0 -eq 0 ]]
00:21:03.871    19:19:34 sma.sma_crypto -- sma/crypto.sh@266 -- # rpc_cmd nvmf_get_subsystems
00:21:03.871    19:19:34 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:03.871    19:19:34 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:21:03.871    19:19:34 sma.sma_crypto -- sma/crypto.sh@266 -- # jq -r '[.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode0")] | length'
00:21:03.871    19:19:34 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:03.871   19:19:34 sma.sma_crypto -- sma/crypto.sh@266 -- # [[ 0 -eq 0 ]]
00:21:03.871   19:19:34 sma.sma_crypto -- sma/crypto.sh@269 -- # killprocess 606176
00:21:03.871   19:19:34 sma.sma_crypto -- common/autotest_common.sh@954 -- # '[' -z 606176 ']'
00:21:03.871   19:19:34 sma.sma_crypto -- common/autotest_common.sh@958 -- # kill -0 606176
00:21:03.871    19:19:34 sma.sma_crypto -- common/autotest_common.sh@959 -- # uname
00:21:03.871   19:19:34 sma.sma_crypto -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:03.871    19:19:34 sma.sma_crypto -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 606176
00:21:03.871   19:19:34 sma.sma_crypto -- common/autotest_common.sh@960 -- # process_name=python3
00:21:03.871   19:19:34 sma.sma_crypto -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:21:03.871   19:19:34 sma.sma_crypto -- common/autotest_common.sh@972 -- # echo 'killing process with pid 606176'
00:21:03.871  killing process with pid 606176
00:21:03.871   19:19:34 sma.sma_crypto -- common/autotest_common.sh@973 -- # kill 606176
00:21:03.871   19:19:34 sma.sma_crypto -- common/autotest_common.sh@978 -- # wait 606176
00:21:03.871   19:19:34 sma.sma_crypto -- sma/crypto.sh@278 -- # smapid=608664
00:21:03.871   19:19:34 sma.sma_crypto -- sma/crypto.sh@280 -- # sma_waitforlisten
00:21:03.871   19:19:34 sma.sma_crypto -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:21:03.871   19:19:34 sma.sma_crypto -- sma/crypto.sh@270 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:21:03.871   19:19:34 sma.sma_crypto -- sma/common.sh@8 -- # local sma_port=8080
00:21:03.871    19:19:34 sma.sma_crypto -- sma/crypto.sh@270 -- # cat
00:21:03.871   19:19:34 sma.sma_crypto -- sma/common.sh@10 -- # (( i = 0 ))
00:21:03.871   19:19:34 sma.sma_crypto -- sma/common.sh@10 -- # (( i < 5 ))
00:21:03.871   19:19:34 sma.sma_crypto -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:21:03.871   19:19:34 sma.sma_crypto -- sma/common.sh@14 -- # sleep 1s
00:21:04.130  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:21:04.130  I0000 00:00:1733509175.033224  608664 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:21:05.066   19:19:35 sma.sma_crypto -- sma/common.sh@10 -- # (( i++ ))
00:21:05.066   19:19:35 sma.sma_crypto -- sma/common.sh@10 -- # (( i < 5 ))
00:21:05.066   19:19:35 sma.sma_crypto -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:21:05.066   19:19:35 sma.sma_crypto -- sma/common.sh@12 -- # return 0
00:21:05.066    19:19:35 sma.sma_crypto -- sma/crypto.sh@281 -- # create_device
00:21:05.066    19:19:35 sma.sma_crypto -- sma/crypto.sh@77 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:05.066    19:19:35 sma.sma_crypto -- sma/crypto.sh@281 -- # jq -r .handle
00:21:05.323  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:21:05.323  I0000 00:00:1733509176.067393  608779 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:21:05.323  I0000 00:00:1733509176.069254  608779 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:21:05.324  I0000 00:00:1733509176.070688  608836 subchannel.cc:806] subchannel 0x559e4ad2e560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x559e4ad44f20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x559e4acfb6e0, grpc.internal.client_channel_call_destination=0x7f4f662f2390, grpc.internal.event_engine=0x559e4ad2a5b0, grpc.internal.security_connector=0x559e4acaefb0, grpc.internal.subchannel_pool=0x559e4ad7e410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x559e4ac48a60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:36.070229718+01:00"}), backing off for 999 ms
00:21:05.324  [2024-12-06 19:19:36.091813] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:21:05.324   19:19:36 sma.sma_crypto -- sma/crypto.sh@281 -- # device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:21:05.324   19:19:36 sma.sma_crypto -- sma/crypto.sh@283 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 8ffb3c5c-7b4d-454d-a610-242a29a94ad2 AES_CBC 1234567890abcdef1234567890abcdef
00:21:05.324   19:19:36 sma.sma_crypto -- common/autotest_common.sh@652 -- # local es=0
00:21:05.324   19:19:36 sma.sma_crypto -- common/autotest_common.sh@654 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 8ffb3c5c-7b4d-454d-a610-242a29a94ad2 AES_CBC 1234567890abcdef1234567890abcdef
00:21:05.324   19:19:36 sma.sma_crypto -- common/autotest_common.sh@640 -- # local arg=attach_volume
00:21:05.324   19:19:36 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:05.324    19:19:36 sma.sma_crypto -- common/autotest_common.sh@644 -- # type -t attach_volume
00:21:05.324   19:19:36 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:05.324   19:19:36 sma.sma_crypto -- common/autotest_common.sh@655 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 8ffb3c5c-7b4d-454d-a610-242a29a94ad2 AES_CBC 1234567890abcdef1234567890abcdef
00:21:05.324   19:19:36 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:21:05.324   19:19:36 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:21:05.324   19:19:36 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:05.324    19:19:36 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 8ffb3c5c-7b4d-454d-a610-242a29a94ad2 AES_CBC 1234567890abcdef1234567890abcdef
00:21:05.324    19:19:36 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=8ffb3c5c-7b4d-454d-a610-242a29a94ad2 cipher=AES_CBC key=1234567890abcdef1234567890abcdef key2= config
00:21:05.324    19:19:36 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:21:05.324     19:19:36 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:21:05.324      19:19:36 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 8ffb3c5c-7b4d-454d-a610-242a29a94ad2
00:21:05.324      19:19:36 sma.sma_crypto -- sma/common.sh@20 -- # python
00:21:05.324    19:19:36 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "j/s8XHtNRU2mECQqKalK0g==",
00:21:05.324  "nvmf": {
00:21:05.324    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:21:05.324    "discovery": {
00:21:05.324      "discovery_endpoints": [
00:21:05.324        {
00:21:05.324          "trtype": "tcp",
00:21:05.324          "traddr": "127.0.0.1",
00:21:05.324          "trsvcid": "8009"
00:21:05.324        }
00:21:05.324      ]
00:21:05.324    }
00:21:05.324  }'
00:21:05.324    19:19:36 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:21:05.324    19:19:36 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:21:05.324    19:19:36 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n AES_CBC ]]
00:21:05.324    19:19:36 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:21:05.324     19:19:36 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher AES_CBC
00:21:05.324     19:19:36 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:21:05.324     19:19:36 sma.sma_crypto -- sma/common.sh@28 -- # echo 0
00:21:05.324    19:19:36 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:21:05.324     19:19:36 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:21:05.324     19:19:36 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:21:05.324      19:19:36 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:21:05.324    19:19:36 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:21:05.324     19:19:36 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:21:05.324    19:19:36 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:21:05.324    "cipher": 0,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY="
00:21:05.324  }'
00:21:05.324    19:19:36 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:21:05.324    19:19:36 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:21:05.582  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:21:05.582  I0000 00:00:1733509176.420295  608858 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:21:05.582  I0000 00:00:1733509176.422187  608858 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:21:05.582  I0000 00:00:1733509176.423874  608877 subchannel.cc:806] subchannel 0x55d7cc381560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55d7cc397f20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55d7cc34e6e0, grpc.internal.client_channel_call_destination=0x7f1310bd8390, grpc.internal.event_engine=0x55d7cc37d5b0, grpc.internal.security_connector=0x55d7cc37d540, grpc.internal.subchannel_pool=0x55d7cc3d1410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55d7cc29ba60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:36.423365876+01:00"}), backing off for 1000 ms
00:21:06.964  Traceback (most recent call last):
00:21:06.964    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:21:06.964      main(sys.argv[1:])
00:21:06.964    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:21:06.964      result = client.call(request['method'], request.get('params', {}))
00:21:06.964               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:21:06.964    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:21:06.964      response = func(request=json_format.ParseDict(params, input()))
00:21:06.964                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:21:06.964    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:21:06.964      return _end_unary_response_blocking(state, call, False, None)
00:21:06.964             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:21:06.964    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:21:06.964      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:21:06.964      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:21:06.964  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:21:06.965  	status = StatusCode.INVALID_ARGUMENT
00:21:06.965  	details = "Crypto is disabled"
00:21:06.965  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-12-06T19:19:37.545127991+01:00", grpc_status:3, grpc_message:"Crypto is disabled"}"
00:21:06.965  >
00:21:06.965   19:19:37 sma.sma_crypto -- common/autotest_common.sh@655 -- # es=1
00:21:06.965   19:19:37 sma.sma_crypto -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:21:06.965   19:19:37 sma.sma_crypto -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:21:06.965   19:19:37 sma.sma_crypto -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:21:06.965    19:19:37 sma.sma_crypto -- sma/crypto.sh@284 -- # rpc_cmd bdev_nvme_get_discovery_info
00:21:06.965    19:19:37 sma.sma_crypto -- sma/crypto.sh@284 -- # jq -r '. | length'
00:21:06.965    19:19:37 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:06.965    19:19:37 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:21:06.965    19:19:37 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:06.965   19:19:37 sma.sma_crypto -- sma/crypto.sh@284 -- # [[ 0 -eq 0 ]]
00:21:06.965    19:19:37 sma.sma_crypto -- sma/crypto.sh@285 -- # rpc_cmd bdev_get_bdevs
00:21:06.965    19:19:37 sma.sma_crypto -- sma/crypto.sh@285 -- # jq -r length
00:21:06.965    19:19:37 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:06.965    19:19:37 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:21:06.965    19:19:37 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:06.965   19:19:37 sma.sma_crypto -- sma/crypto.sh@285 -- # [[ 0 -eq 0 ]]
00:21:06.965   19:19:37 sma.sma_crypto -- sma/crypto.sh@287 -- # cleanup
00:21:06.965   19:19:37 sma.sma_crypto -- sma/crypto.sh@22 -- # killprocess 608664
00:21:06.965   19:19:37 sma.sma_crypto -- common/autotest_common.sh@954 -- # '[' -z 608664 ']'
00:21:06.965   19:19:37 sma.sma_crypto -- common/autotest_common.sh@958 -- # kill -0 608664
00:21:06.965    19:19:37 sma.sma_crypto -- common/autotest_common.sh@959 -- # uname
00:21:06.965   19:19:37 sma.sma_crypto -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:06.965    19:19:37 sma.sma_crypto -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 608664
00:21:06.965   19:19:37 sma.sma_crypto -- common/autotest_common.sh@960 -- # process_name=python3
00:21:06.965   19:19:37 sma.sma_crypto -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:21:06.965   19:19:37 sma.sma_crypto -- common/autotest_common.sh@972 -- # echo 'killing process with pid 608664'
00:21:06.965  killing process with pid 608664
00:21:06.965   19:19:37 sma.sma_crypto -- common/autotest_common.sh@973 -- # kill 608664
00:21:06.965   19:19:37 sma.sma_crypto -- common/autotest_common.sh@978 -- # wait 608664
00:21:06.965   19:19:37 sma.sma_crypto -- sma/crypto.sh@23 -- # killprocess 605860
00:21:06.965   19:19:37 sma.sma_crypto -- common/autotest_common.sh@954 -- # '[' -z 605860 ']'
00:21:06.965   19:19:37 sma.sma_crypto -- common/autotest_common.sh@958 -- # kill -0 605860
00:21:06.965    19:19:37 sma.sma_crypto -- common/autotest_common.sh@959 -- # uname
00:21:06.965   19:19:37 sma.sma_crypto -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:06.965    19:19:37 sma.sma_crypto -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 605860
00:21:06.965   19:19:37 sma.sma_crypto -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:21:06.965   19:19:37 sma.sma_crypto -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:21:06.965   19:19:37 sma.sma_crypto -- common/autotest_common.sh@972 -- # echo 'killing process with pid 605860'
00:21:06.965  killing process with pid 605860
00:21:06.965   19:19:37 sma.sma_crypto -- common/autotest_common.sh@973 -- # kill 605860
00:21:06.965   19:19:37 sma.sma_crypto -- common/autotest_common.sh@978 -- # wait 605860
00:21:08.872   19:19:39 sma.sma_crypto -- sma/crypto.sh@24 -- # killprocess 606174
00:21:08.872   19:19:39 sma.sma_crypto -- common/autotest_common.sh@954 -- # '[' -z 606174 ']'
00:21:08.872   19:19:39 sma.sma_crypto -- common/autotest_common.sh@958 -- # kill -0 606174
00:21:08.872    19:19:39 sma.sma_crypto -- common/autotest_common.sh@959 -- # uname
00:21:08.872   19:19:39 sma.sma_crypto -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:08.872    19:19:39 sma.sma_crypto -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 606174
00:21:08.872   19:19:39 sma.sma_crypto -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:21:08.872   19:19:39 sma.sma_crypto -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:21:08.872   19:19:39 sma.sma_crypto -- common/autotest_common.sh@972 -- # echo 'killing process with pid 606174'
00:21:08.872  killing process with pid 606174
00:21:08.872   19:19:39 sma.sma_crypto -- common/autotest_common.sh@973 -- # kill 606174
00:21:08.872   19:19:39 sma.sma_crypto -- common/autotest_common.sh@978 -- # wait 606174
00:21:11.414   19:19:41 sma.sma_crypto -- sma/crypto.sh@288 -- # trap - SIGINT SIGTERM EXIT
00:21:11.414  
00:21:11.414  real	0m25.001s
00:21:11.414  user	0m51.970s
00:21:11.414  sys	0m3.417s
00:21:11.414   19:19:41 sma.sma_crypto -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:11.414   19:19:41 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:21:11.414  ************************************
00:21:11.414  END TEST sma_crypto
00:21:11.414  ************************************
00:21:11.414   19:19:41 sma -- sma/sma.sh@17 -- # run_test sma_qos /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/qos.sh
00:21:11.414   19:19:41 sma -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:21:11.414   19:19:41 sma -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:11.414   19:19:41 sma -- common/autotest_common.sh@10 -- # set +x
00:21:11.414  ************************************
00:21:11.414  START TEST sma_qos
00:21:11.414  ************************************
00:21:11.414   19:19:41 sma.sma_qos -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/qos.sh
00:21:11.414  * Looking for test storage...
00:21:11.414  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:21:11.414    19:19:41 sma.sma_qos -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:21:11.414     19:19:41 sma.sma_qos -- common/autotest_common.sh@1711 -- # lcov --version
00:21:11.414     19:19:41 sma.sma_qos -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:21:11.414    19:19:42 sma.sma_qos -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:21:11.414    19:19:42 sma.sma_qos -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:21:11.414    19:19:42 sma.sma_qos -- scripts/common.sh@333 -- # local ver1 ver1_l
00:21:11.414    19:19:42 sma.sma_qos -- scripts/common.sh@334 -- # local ver2 ver2_l
00:21:11.414    19:19:42 sma.sma_qos -- scripts/common.sh@336 -- # IFS=.-:
00:21:11.414    19:19:42 sma.sma_qos -- scripts/common.sh@336 -- # read -ra ver1
00:21:11.414    19:19:42 sma.sma_qos -- scripts/common.sh@337 -- # IFS=.-:
00:21:11.414    19:19:42 sma.sma_qos -- scripts/common.sh@337 -- # read -ra ver2
00:21:11.414    19:19:42 sma.sma_qos -- scripts/common.sh@338 -- # local 'op=<'
00:21:11.414    19:19:42 sma.sma_qos -- scripts/common.sh@340 -- # ver1_l=2
00:21:11.414    19:19:42 sma.sma_qos -- scripts/common.sh@341 -- # ver2_l=1
00:21:11.414    19:19:42 sma.sma_qos -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:21:11.414    19:19:42 sma.sma_qos -- scripts/common.sh@344 -- # case "$op" in
00:21:11.414    19:19:42 sma.sma_qos -- scripts/common.sh@345 -- # : 1
00:21:11.414    19:19:42 sma.sma_qos -- scripts/common.sh@364 -- # (( v = 0 ))
00:21:11.414    19:19:42 sma.sma_qos -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:11.414     19:19:42 sma.sma_qos -- scripts/common.sh@365 -- # decimal 1
00:21:11.414     19:19:42 sma.sma_qos -- scripts/common.sh@353 -- # local d=1
00:21:11.414     19:19:42 sma.sma_qos -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:21:11.414     19:19:42 sma.sma_qos -- scripts/common.sh@355 -- # echo 1
00:21:11.414    19:19:42 sma.sma_qos -- scripts/common.sh@365 -- # ver1[v]=1
00:21:11.414     19:19:42 sma.sma_qos -- scripts/common.sh@366 -- # decimal 2
00:21:11.414     19:19:42 sma.sma_qos -- scripts/common.sh@353 -- # local d=2
00:21:11.414     19:19:42 sma.sma_qos -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:21:11.414     19:19:42 sma.sma_qos -- scripts/common.sh@355 -- # echo 2
00:21:11.414    19:19:42 sma.sma_qos -- scripts/common.sh@366 -- # ver2[v]=2
00:21:11.414    19:19:42 sma.sma_qos -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:21:11.414    19:19:42 sma.sma_qos -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:21:11.414    19:19:42 sma.sma_qos -- scripts/common.sh@368 -- # return 0
00:21:11.414    19:19:42 sma.sma_qos -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:21:11.414    19:19:42 sma.sma_qos -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:21:11.414  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:11.414  		--rc genhtml_branch_coverage=1
00:21:11.414  		--rc genhtml_function_coverage=1
00:21:11.414  		--rc genhtml_legend=1
00:21:11.414  		--rc geninfo_all_blocks=1
00:21:11.414  		--rc geninfo_unexecuted_blocks=1
00:21:11.414  		
00:21:11.414  		'
00:21:11.414    19:19:42 sma.sma_qos -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:21:11.414  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:11.414  		--rc genhtml_branch_coverage=1
00:21:11.414  		--rc genhtml_function_coverage=1
00:21:11.414  		--rc genhtml_legend=1
00:21:11.414  		--rc geninfo_all_blocks=1
00:21:11.414  		--rc geninfo_unexecuted_blocks=1
00:21:11.414  		
00:21:11.414  		'
00:21:11.414    19:19:42 sma.sma_qos -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:21:11.414  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:11.414  		--rc genhtml_branch_coverage=1
00:21:11.414  		--rc genhtml_function_coverage=1
00:21:11.414  		--rc genhtml_legend=1
00:21:11.414  		--rc geninfo_all_blocks=1
00:21:11.414  		--rc geninfo_unexecuted_blocks=1
00:21:11.414  		
00:21:11.414  		'
00:21:11.414    19:19:42 sma.sma_qos -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:21:11.414  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:11.414  		--rc genhtml_branch_coverage=1
00:21:11.414  		--rc genhtml_function_coverage=1
00:21:11.414  		--rc genhtml_legend=1
00:21:11.414  		--rc geninfo_all_blocks=1
00:21:11.414  		--rc geninfo_unexecuted_blocks=1
00:21:11.414  		
00:21:11.414  		'
00:21:11.414   19:19:42 sma.sma_qos -- sma/qos.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:21:11.414   19:19:42 sma.sma_qos -- sma/qos.sh@13 -- # smac=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:11.414   19:19:42 sma.sma_qos -- sma/qos.sh@15 -- # device_nvmf_tcp=3
00:21:11.414    19:19:42 sma.sma_qos -- sma/qos.sh@16 -- # printf %u -1
00:21:11.414   19:19:42 sma.sma_qos -- sma/qos.sh@16 -- # limit_reserved=18446744073709551615
00:21:11.414   19:19:42 sma.sma_qos -- sma/qos.sh@42 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:21:11.414   19:19:42 sma.sma_qos -- sma/qos.sh@45 -- # tgtpid=609642
00:21:11.414   19:19:42 sma.sma_qos -- sma/qos.sh@44 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:21:11.414   19:19:42 sma.sma_qos -- sma/qos.sh@55 -- # smapid=609643
00:21:11.414   19:19:42 sma.sma_qos -- sma/qos.sh@57 -- # sma_waitforlisten
00:21:11.414   19:19:42 sma.sma_qos -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:21:11.414   19:19:42 sma.sma_qos -- sma/common.sh@8 -- # local sma_port=8080
00:21:11.414   19:19:42 sma.sma_qos -- sma/qos.sh@47 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:21:11.414   19:19:42 sma.sma_qos -- sma/common.sh@10 -- # (( i = 0 ))
00:21:11.414    19:19:42 sma.sma_qos -- sma/qos.sh@47 -- # cat
00:21:11.414   19:19:42 sma.sma_qos -- sma/common.sh@10 -- # (( i < 5 ))
00:21:11.414   19:19:42 sma.sma_qos -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:21:11.414   19:19:42 sma.sma_qos -- sma/common.sh@14 -- # sleep 1s
00:21:11.414  [2024-12-06 19:19:42.150994] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:21:11.414  [2024-12-06 19:19:42.151163] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid609642 ]
00:21:11.415  EAL: No free 2048 kB hugepages reported on node 1
00:21:11.415  [2024-12-06 19:19:42.298551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:11.674  [2024-12-06 19:19:42.419372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:12.242   19:19:43 sma.sma_qos -- sma/common.sh@10 -- # (( i++ ))
00:21:12.242   19:19:43 sma.sma_qos -- sma/common.sh@10 -- # (( i < 5 ))
00:21:12.242   19:19:43 sma.sma_qos -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:21:12.242   19:19:43 sma.sma_qos -- sma/common.sh@14 -- # sleep 1s
00:21:12.503  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:21:12.503  I0000 00:00:1733509183.287397  609643 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:21:12.503  [2024-12-06 19:19:43.302259] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:13.442   19:19:44 sma.sma_qos -- sma/common.sh@10 -- # (( i++ ))
00:21:13.442   19:19:44 sma.sma_qos -- sma/common.sh@10 -- # (( i < 5 ))
00:21:13.442   19:19:44 sma.sma_qos -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:21:13.442   19:19:44 sma.sma_qos -- sma/common.sh@12 -- # return 0
00:21:13.442   19:19:44 sma.sma_qos -- sma/qos.sh@60 -- # rpc_cmd bdev_null_create null0 100 4096
00:21:13.442   19:19:44 sma.sma_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:13.442   19:19:44 sma.sma_qos -- common/autotest_common.sh@10 -- # set +x
00:21:13.442  null0
00:21:13.442   19:19:44 sma.sma_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:13.442    19:19:44 sma.sma_qos -- sma/qos.sh@61 -- # rpc_cmd bdev_get_bdevs -b null0
00:21:13.442    19:19:44 sma.sma_qos -- sma/qos.sh@61 -- # jq -r '.[].uuid'
00:21:13.442    19:19:44 sma.sma_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:13.442    19:19:44 sma.sma_qos -- common/autotest_common.sh@10 -- # set +x
00:21:13.442    19:19:44 sma.sma_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:13.442   19:19:44 sma.sma_qos -- sma/qos.sh@61 -- # uuid=9f945741-42f2-4465-8d7d-f64fb5f5a39b
00:21:13.442    19:19:44 sma.sma_qos -- sma/qos.sh@62 -- # create_device 9f945741-42f2-4465-8d7d-f64fb5f5a39b
00:21:13.442    19:19:44 sma.sma_qos -- sma/qos.sh@62 -- # jq -r .handle
00:21:13.442    19:19:44 sma.sma_qos -- sma/qos.sh@24 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:13.442     19:19:44 sma.sma_qos -- sma/qos.sh@24 -- # uuid2base64 9f945741-42f2-4465-8d7d-f64fb5f5a39b
00:21:13.442     19:19:44 sma.sma_qos -- sma/common.sh@20 -- # python
00:21:13.701  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:21:13.701  I0000 00:00:1733509184.440716  609950 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:21:13.701  I0000 00:00:1733509184.442566  609950 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:21:13.701  I0000 00:00:1733509184.444228  609953 subchannel.cc:806] subchannel 0x561270e6f560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x561270e85f20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x561270e3c6e0, grpc.internal.client_channel_call_destination=0x7fa13c529390, grpc.internal.event_engine=0x561270e6b5b0, grpc.internal.security_connector=0x561270e6b540, grpc.internal.subchannel_pool=0x561270ebf410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x561270d89a60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:44.443712823+01:00"}), backing off for 1000 ms
00:21:13.701  [2024-12-06 19:19:44.474560] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:21:13.701   19:19:44 sma.sma_qos -- sma/qos.sh@62 -- # device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:21:13.701   19:19:44 sma.sma_qos -- sma/qos.sh@65 -- # diff /dev/fd/62 /dev/fd/61
00:21:13.701    19:19:44 sma.sma_qos -- sma/qos.sh@65 -- # jq --sort-keys
00:21:13.701    19:19:44 sma.sma_qos -- sma/qos.sh@65 -- # get_qos_caps 3
00:21:13.701    19:19:44 sma.sma_qos -- sma/qos.sh@65 -- # jq --sort-keys
00:21:13.701    19:19:44 sma.sma_qos -- sma/common.sh@45 -- # local rootdir
00:21:13.701     19:19:44 sma.sma_qos -- sma/common.sh@47 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:21:13.701    19:19:44 sma.sma_qos -- sma/common.sh@47 -- # rootdir=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../..
00:21:13.701    19:19:44 sma.sma_qos -- sma/common.sh@49 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../../scripts/sma-client.py
00:21:13.959  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:21:13.959  I0000 00:00:1733509184.739107  609983 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:21:13.959  I0000 00:00:1733509184.741030  609983 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:21:13.959  I0000 00:00:1733509184.742527  609984 subchannel.cc:806] subchannel 0x55719d5404e0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55719d4be640, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55719d3b8020, grpc.internal.client_channel_call_destination=0x7f462c120390, grpc.internal.event_engine=0x55719d36ec90, grpc.internal.security_connector=0x55719d471480, grpc.internal.subchannel_pool=0x55719d4712e0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55719d3894b0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:44.741939068+01:00"}), backing off for 1000 ms
00:21:13.959   19:19:44 sma.sma_qos -- sma/qos.sh@79 -- # NOT get_qos_caps 1234
00:21:13.959   19:19:44 sma.sma_qos -- common/autotest_common.sh@652 -- # local es=0
00:21:13.959   19:19:44 sma.sma_qos -- common/autotest_common.sh@654 -- # valid_exec_arg get_qos_caps 1234
00:21:13.959   19:19:44 sma.sma_qos -- common/autotest_common.sh@640 -- # local arg=get_qos_caps
00:21:13.959   19:19:44 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:13.959    19:19:44 sma.sma_qos -- common/autotest_common.sh@644 -- # type -t get_qos_caps
00:21:13.959   19:19:44 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:13.960   19:19:44 sma.sma_qos -- common/autotest_common.sh@655 -- # get_qos_caps 1234
00:21:13.960   19:19:44 sma.sma_qos -- sma/common.sh@45 -- # local rootdir
00:21:13.960    19:19:44 sma.sma_qos -- sma/common.sh@47 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:21:13.960   19:19:44 sma.sma_qos -- sma/common.sh@47 -- # rootdir=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../..
00:21:13.960   19:19:44 sma.sma_qos -- sma/common.sh@49 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../../scripts/sma-client.py
00:21:14.219  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:21:14.219  I0000 00:00:1733509185.011398  610007 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:21:14.219  I0000 00:00:1733509185.013096  610007 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:21:14.219  I0000 00:00:1733509185.014613  610137 subchannel.cc:806] subchannel 0x563c3e2db4e0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x563c3e259640, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x563c3e153020, grpc.internal.client_channel_call_destination=0x7f08d5b39390, grpc.internal.event_engine=0x563c3e109c90, grpc.internal.security_connector=0x563c3e20c480, grpc.internal.subchannel_pool=0x563c3e20c2e0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x563c3e1244b0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:45.014072813+01:00"}), backing off for 999 ms
00:21:14.219  Traceback (most recent call last):
00:21:14.219    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../../scripts/sma-client.py", line 74, in <module>
00:21:14.219      main(sys.argv[1:])
00:21:14.219    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../../scripts/sma-client.py", line 69, in main
00:21:14.219      result = client.call(request['method'], request.get('params', {}))
00:21:14.219               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:21:14.219    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../../scripts/sma-client.py", line 43, in call
00:21:14.219      response = func(request=json_format.ParseDict(params, input()))
00:21:14.219                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:21:14.219    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:21:14.219      return _end_unary_response_blocking(state, call, False, None)
00:21:14.219             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:21:14.219    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:21:14.219      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:21:14.219      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:21:14.219  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:21:14.219  	status = StatusCode.INVALID_ARGUMENT
00:21:14.219  	details = "Invalid device type"
00:21:14.219  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Invalid device type", grpc_status:3, created_time:"2024-12-06T19:19:45.016026751+01:00"}"
00:21:14.219  >
00:21:14.219   19:19:45 sma.sma_qos -- common/autotest_common.sh@655 -- # es=1
00:21:14.219   19:19:45 sma.sma_qos -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:21:14.219   19:19:45 sma.sma_qos -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:21:14.219   19:19:45 sma.sma_qos -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:21:14.219   19:19:45 sma.sma_qos -- sma/qos.sh@82 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:14.219    19:19:45 sma.sma_qos -- sma/qos.sh@82 -- # uuid2base64 9f945741-42f2-4465-8d7d-f64fb5f5a39b
00:21:14.219    19:19:45 sma.sma_qos -- sma/common.sh@20 -- # python
00:21:14.477  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:21:14.477  I0000 00:00:1733509185.324428  610157 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:21:14.477  I0000 00:00:1733509185.326212  610157 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:21:14.477  I0000 00:00:1733509185.327645  610162 subchannel.cc:806] subchannel 0x55d4f997f560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55d4f9995f20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55d4f994c6e0, grpc.internal.client_channel_call_destination=0x7fea0e201390, grpc.internal.event_engine=0x55d4f997b5b0, grpc.internal.security_connector=0x55d4f997b540, grpc.internal.subchannel_pool=0x55d4f99cf410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55d4f9899a60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:45.327178105+01:00"}), backing off for 999 ms
00:21:14.477  {}
00:21:14.477   19:19:45 sma.sma_qos -- sma/qos.sh@94 -- # diff /dev/fd/62 /dev/fd/61
00:21:14.477    19:19:45 sma.sma_qos -- sma/qos.sh@94 -- # jq --sort-keys
00:21:14.477    19:19:45 sma.sma_qos -- sma/qos.sh@94 -- # rpc_cmd bdev_get_bdevs -b null0
00:21:14.477    19:19:45 sma.sma_qos -- sma/qos.sh@94 -- # jq --sort-keys '.[].assigned_rate_limits'
00:21:14.477    19:19:45 sma.sma_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:14.477    19:19:45 sma.sma_qos -- common/autotest_common.sh@10 -- # set +x
00:21:14.477    19:19:45 sma.sma_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:14.477   19:19:45 sma.sma_qos -- sma/qos.sh@106 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:14.477    19:19:45 sma.sma_qos -- sma/qos.sh@106 -- # uuid2base64 9f945741-42f2-4465-8d7d-f64fb5f5a39b
00:21:14.477    19:19:45 sma.sma_qos -- sma/common.sh@20 -- # python
00:21:15.043  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:21:15.043  I0000 00:00:1733509185.686397  610188 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:21:15.043  I0000 00:00:1733509185.688386  610188 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:21:15.043  I0000 00:00:1733509185.689959  610195 subchannel.cc:806] subchannel 0x5629bb775560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5629bb78bf20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5629bb7426e0, grpc.internal.client_channel_call_destination=0x7fa045a1b390, grpc.internal.event_engine=0x5629bb7715b0, grpc.internal.security_connector=0x5629bb771540, grpc.internal.subchannel_pool=0x5629bb7c5410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5629bb68fa60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:45.689478025+01:00"}), backing off for 1000 ms
00:21:15.043  {}
00:21:15.043   19:19:45 sma.sma_qos -- sma/qos.sh@119 -- # diff /dev/fd/62 /dev/fd/61
00:21:15.043    19:19:45 sma.sma_qos -- sma/qos.sh@119 -- # jq --sort-keys
00:21:15.043    19:19:45 sma.sma_qos -- sma/qos.sh@119 -- # rpc_cmd bdev_get_bdevs -b null0
00:21:15.043    19:19:45 sma.sma_qos -- sma/qos.sh@119 -- # jq --sort-keys '.[].assigned_rate_limits'
00:21:15.043    19:19:45 sma.sma_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:15.043    19:19:45 sma.sma_qos -- common/autotest_common.sh@10 -- # set +x
00:21:15.043    19:19:45 sma.sma_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:15.043   19:19:45 sma.sma_qos -- sma/qos.sh@131 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:15.043    19:19:45 sma.sma_qos -- sma/qos.sh@131 -- # uuid2base64 9f945741-42f2-4465-8d7d-f64fb5f5a39b
00:21:15.043    19:19:45 sma.sma_qos -- sma/common.sh@20 -- # python
00:21:15.302  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:21:15.302  I0000 00:00:1733509186.048560  610221 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:21:15.302  I0000 00:00:1733509186.050378  610221 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:21:15.302  I0000 00:00:1733509186.051990  610353 subchannel.cc:806] subchannel 0x559cb4e15560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x559cb4e2bf20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x559cb4de26e0, grpc.internal.client_channel_call_destination=0x7f64463c1390, grpc.internal.event_engine=0x559cb4e115b0, grpc.internal.security_connector=0x559cb4e11540, grpc.internal.subchannel_pool=0x559cb4e65410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x559cb4d2fa60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:46.051500898+01:00"}), backing off for 1000 ms
00:21:15.302  {}
00:21:15.302   19:19:46 sma.sma_qos -- sma/qos.sh@145 -- # diff /dev/fd/62 /dev/fd/61
00:21:15.302    19:19:46 sma.sma_qos -- sma/qos.sh@145 -- # jq --sort-keys
00:21:15.302    19:19:46 sma.sma_qos -- sma/qos.sh@145 -- # rpc_cmd bdev_get_bdevs -b null0
00:21:15.302    19:19:46 sma.sma_qos -- sma/qos.sh@145 -- # jq --sort-keys '.[].assigned_rate_limits'
00:21:15.302    19:19:46 sma.sma_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:15.302    19:19:46 sma.sma_qos -- common/autotest_common.sh@10 -- # set +x
00:21:15.302    19:19:46 sma.sma_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:15.302   19:19:46 sma.sma_qos -- sma/qos.sh@157 -- # unsupported_max_limits=(rd_iops wr_iops)
00:21:15.302   19:19:46 sma.sma_qos -- sma/qos.sh@159 -- # for limit in "${unsupported_max_limits[@]}"
00:21:15.302   19:19:46 sma.sma_qos -- sma/qos.sh@160 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:15.302    19:19:46 sma.sma_qos -- sma/qos.sh@160 -- # uuid2base64 9f945741-42f2-4465-8d7d-f64fb5f5a39b
00:21:15.302    19:19:46 sma.sma_qos -- sma/common.sh@20 -- # python
00:21:15.302   19:19:46 sma.sma_qos -- common/autotest_common.sh@652 -- # local es=0
00:21:15.302   19:19:46 sma.sma_qos -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:15.302   19:19:46 sma.sma_qos -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:15.302   19:19:46 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:15.302    19:19:46 sma.sma_qos -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:15.302   19:19:46 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:15.302    19:19:46 sma.sma_qos -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:15.302   19:19:46 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:15.302   19:19:46 sma.sma_qos -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:15.302   19:19:46 sma.sma_qos -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:21:15.302   19:19:46 sma.sma_qos -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:15.561  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:21:15.561  I0000 00:00:1733509186.403816  610384 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:21:15.561  I0000 00:00:1733509186.405820  610384 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:21:15.561  I0000 00:00:1733509186.407409  610385 subchannel.cc:806] subchannel 0x5587923cc560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5587923e2f20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5587923996e0, grpc.internal.client_channel_call_destination=0x7f7543382390, grpc.internal.event_engine=0x5587923c85b0, grpc.internal.security_connector=0x5587923c8540, grpc.internal.subchannel_pool=0x55879241c410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5587922e6a60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:46.406904027+01:00"}), backing off for 1000 ms
00:21:15.561  Traceback (most recent call last):
00:21:15.561    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:21:15.561      main(sys.argv[1:])
00:21:15.561    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:21:15.561      result = client.call(request['method'], request.get('params', {}))
00:21:15.561               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:21:15.561    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:21:15.561      response = func(request=json_format.ParseDict(params, input()))
00:21:15.561                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:21:15.561    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:21:15.561      return _end_unary_response_blocking(state, call, False, None)
00:21:15.561             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:21:15.561    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:21:15.561      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:21:15.561      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:21:15.561  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:21:15.561  	status = StatusCode.INVALID_ARGUMENT
00:21:15.561  	details = "Unsupported QoS limit: maximum.rd_iops"
00:21:15.561  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-12-06T19:19:46.424231609+01:00", grpc_status:3, grpc_message:"Unsupported QoS limit: maximum.rd_iops"}"
00:21:15.561  >
00:21:15.561   19:19:46 sma.sma_qos -- common/autotest_common.sh@655 -- # es=1
00:21:15.561   19:19:46 sma.sma_qos -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:21:15.561   19:19:46 sma.sma_qos -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:21:15.561   19:19:46 sma.sma_qos -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:21:15.561   19:19:46 sma.sma_qos -- sma/qos.sh@159 -- # for limit in "${unsupported_max_limits[@]}"
00:21:15.561   19:19:46 sma.sma_qos -- sma/qos.sh@160 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:15.561    19:19:46 sma.sma_qos -- sma/qos.sh@160 -- # uuid2base64 9f945741-42f2-4465-8d7d-f64fb5f5a39b
00:21:15.561    19:19:46 sma.sma_qos -- sma/common.sh@20 -- # python
00:21:15.561   19:19:46 sma.sma_qos -- common/autotest_common.sh@652 -- # local es=0
00:21:15.561   19:19:46 sma.sma_qos -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:15.561   19:19:46 sma.sma_qos -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:15.561   19:19:46 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:15.561    19:19:46 sma.sma_qos -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:15.561   19:19:46 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:15.561    19:19:46 sma.sma_qos -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:15.561   19:19:46 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:15.561   19:19:46 sma.sma_qos -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:15.561   19:19:46 sma.sma_qos -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:21:15.561   19:19:46 sma.sma_qos -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:15.820  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:21:15.820  I0000 00:00:1733509186.719899  610409 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:21:15.820  I0000 00:00:1733509186.721763  610409 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:21:15.820  I0000 00:00:1733509186.723359  610410 subchannel.cc:806] subchannel 0x564422346560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x56442235cf20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5644223136e0, grpc.internal.client_channel_call_destination=0x7f17137fe390, grpc.internal.event_engine=0x5644223425b0, grpc.internal.security_connector=0x564422342540, grpc.internal.subchannel_pool=0x564422396410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x564422260a60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:46.722861419+01:00"}), backing off for 1000 ms
00:21:15.820  Traceback (most recent call last):
00:21:15.820    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:21:15.820      main(sys.argv[1:])
00:21:15.820    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:21:15.820      result = client.call(request['method'], request.get('params', {}))
00:21:15.820               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:21:15.820    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:21:15.820      response = func(request=json_format.ParseDict(params, input()))
00:21:15.820                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:21:15.820    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:21:15.820      return _end_unary_response_blocking(state, call, False, None)
00:21:15.820             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:21:15.820    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:21:15.820      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:21:15.820      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:21:15.820  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:21:15.820  	status = StatusCode.INVALID_ARGUMENT
00:21:15.820  	details = "Unsupported QoS limit: maximum.wr_iops"
00:21:15.820  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-12-06T19:19:46.737252867+01:00", grpc_status:3, grpc_message:"Unsupported QoS limit: maximum.wr_iops"}"
00:21:15.820  >
00:21:15.820   19:19:46 sma.sma_qos -- common/autotest_common.sh@655 -- # es=1
00:21:15.820   19:19:46 sma.sma_qos -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:21:15.820   19:19:46 sma.sma_qos -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:21:15.820   19:19:46 sma.sma_qos -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:21:15.820   19:19:46 sma.sma_qos -- sma/qos.sh@178 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:15.820    19:19:46 sma.sma_qos -- sma/qos.sh@178 -- # uuid2base64 9f945741-42f2-4465-8d7d-f64fb5f5a39b
00:21:15.820    19:19:46 sma.sma_qos -- sma/common.sh@20 -- # python
00:21:16.078   19:19:46 sma.sma_qos -- common/autotest_common.sh@652 -- # local es=0
00:21:16.078   19:19:46 sma.sma_qos -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:16.078   19:19:46 sma.sma_qos -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:16.078   19:19:46 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:16.078    19:19:46 sma.sma_qos -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:16.078   19:19:46 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:16.078    19:19:46 sma.sma_qos -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:16.078   19:19:46 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:16.078   19:19:46 sma.sma_qos -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:16.078   19:19:46 sma.sma_qos -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:21:16.078   19:19:46 sma.sma_qos -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:16.337  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:21:16.337  I0000 00:00:1733509187.032535  610434 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:21:16.337  I0000 00:00:1733509187.034408  610434 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:21:16.337  I0000 00:00:1733509187.035961  610448 subchannel.cc:806] subchannel 0x5582e8390560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5582e83a6f20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5582e835d6e0, grpc.internal.client_channel_call_destination=0x7f4b0be37390, grpc.internal.event_engine=0x5582e838c5b0, grpc.internal.security_connector=0x5582e838c540, grpc.internal.subchannel_pool=0x5582e83e0410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5582e82aaa60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:47.035519682+01:00"}), backing off for 1000 ms
00:21:16.337  [2024-12-06 19:19:47.045679] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:cnode0-invalid' does not exist
00:21:16.337  Traceback (most recent call last):
00:21:16.337    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:21:16.337      main(sys.argv[1:])
00:21:16.337    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:21:16.337      result = client.call(request['method'], request.get('params', {}))
00:21:16.337               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:21:16.337    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:21:16.337      response = func(request=json_format.ParseDict(params, input()))
00:21:16.337                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:21:16.337    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:21:16.337      return _end_unary_response_blocking(state, call, False, None)
00:21:16.337             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:21:16.337    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:21:16.337      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:21:16.337      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:21:16.337  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:21:16.337  	status = StatusCode.NOT_FOUND
00:21:16.337  	details = "No device associated with device_handle could be found"
00:21:16.337  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"No device associated with device_handle could be found", grpc_status:5, created_time:"2024-12-06T19:19:47.049985287+01:00"}"
00:21:16.337  >
00:21:16.337   19:19:47 sma.sma_qos -- common/autotest_common.sh@655 -- # es=1
00:21:16.337   19:19:47 sma.sma_qos -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:21:16.337   19:19:47 sma.sma_qos -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:21:16.337   19:19:47 sma.sma_qos -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:21:16.337   19:19:47 sma.sma_qos -- sma/qos.sh@191 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:16.337     19:19:47 sma.sma_qos -- sma/qos.sh@191 -- # uuidgen
00:21:16.337    19:19:47 sma.sma_qos -- sma/qos.sh@191 -- # uuid2base64 8aef8a38-5082-4ac3-a198-646d43ea34b5
00:21:16.337    19:19:47 sma.sma_qos -- sma/common.sh@20 -- # python
00:21:16.337   19:19:47 sma.sma_qos -- common/autotest_common.sh@652 -- # local es=0
00:21:16.337   19:19:47 sma.sma_qos -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:16.337   19:19:47 sma.sma_qos -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:16.337   19:19:47 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:16.338    19:19:47 sma.sma_qos -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:16.338   19:19:47 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:16.338    19:19:47 sma.sma_qos -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:16.338   19:19:47 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:16.338   19:19:47 sma.sma_qos -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:16.338   19:19:47 sma.sma_qos -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:21:16.338   19:19:47 sma.sma_qos -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:16.598  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:21:16.598  I0000 00:00:1733509187.366083  610568 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:21:16.598  I0000 00:00:1733509187.367937  610568 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:21:16.598  I0000 00:00:1733509187.369568  610595 subchannel.cc:806] subchannel 0x5568553db560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5568553f1f20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5568553a86e0, grpc.internal.client_channel_call_destination=0x7fdbe1273390, grpc.internal.event_engine=0x5568553d75b0, grpc.internal.security_connector=0x5568553d7540, grpc.internal.subchannel_pool=0x55685542b410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5568552f5a60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:47.369035893+01:00"}), backing off for 1000 ms
00:21:16.598  [2024-12-06 19:19:47.374778] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 8aef8a38-5082-4ac3-a198-646d43ea34b5
00:21:16.598  Traceback (most recent call last):
00:21:16.598    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:21:16.598      main(sys.argv[1:])
00:21:16.598    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:21:16.598      result = client.call(request['method'], request.get('params', {}))
00:21:16.598               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:21:16.598    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:21:16.598      response = func(request=json_format.ParseDict(params, input()))
00:21:16.598                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:21:16.598    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:21:16.598      return _end_unary_response_blocking(state, call, False, None)
00:21:16.598             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:21:16.598    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:21:16.598      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:21:16.598      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:21:16.598  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:21:16.598  	status = StatusCode.NOT_FOUND
00:21:16.598  	details = "No volume associated with volume_id could be found"
00:21:16.598  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"No volume associated with volume_id could be found", grpc_status:5, created_time:"2024-12-06T19:19:47.378975679+01:00"}"
00:21:16.598  >
00:21:16.598   19:19:47 sma.sma_qos -- common/autotest_common.sh@655 -- # es=1
00:21:16.598   19:19:47 sma.sma_qos -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:21:16.598   19:19:47 sma.sma_qos -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:21:16.598   19:19:47 sma.sma_qos -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:21:16.598   19:19:47 sma.sma_qos -- sma/qos.sh@205 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:16.598   19:19:47 sma.sma_qos -- common/autotest_common.sh@652 -- # local es=0
00:21:16.598   19:19:47 sma.sma_qos -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:16.598   19:19:47 sma.sma_qos -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:16.598   19:19:47 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:16.598    19:19:47 sma.sma_qos -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:16.598   19:19:47 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:16.598    19:19:47 sma.sma_qos -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:16.599   19:19:47 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:16.599   19:19:47 sma.sma_qos -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:16.599   19:19:47 sma.sma_qos -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:21:16.599   19:19:47 sma.sma_qos -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:16.860  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:21:16.860  I0000 00:00:1733509187.645573  610617 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:21:16.860  I0000 00:00:1733509187.647635  610617 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:21:16.860  I0000 00:00:1733509187.649345  610619 subchannel.cc:806] subchannel 0x55e0ba913560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55e0ba929f20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55e0ba8e06e0, grpc.internal.client_channel_call_destination=0x7f7e3c230390, grpc.internal.event_engine=0x55e0ba90f5b0, grpc.internal.security_connector=0x55e0ba893fb0, grpc.internal.subchannel_pool=0x55e0ba963410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55e0ba82da60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:47.648784873+01:00"}), backing off for 1000 ms
00:21:16.860  Traceback (most recent call last):
00:21:16.860    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:21:16.860      main(sys.argv[1:])
00:21:16.860    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:21:16.860      result = client.call(request['method'], request.get('params', {}))
00:21:16.860               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:21:16.860    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:21:16.860      response = func(request=json_format.ParseDict(params, input()))
00:21:16.860                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:21:16.860    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:21:16.860      return _end_unary_response_blocking(state, call, False, None)
00:21:16.860             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:21:16.860    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:21:16.860      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:21:16.860      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:21:16.860  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:21:16.860  	status = StatusCode.INVALID_ARGUMENT
00:21:16.860  	details = "Invalid volume ID"
00:21:16.860  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Invalid volume ID", grpc_status:3, created_time:"2024-12-06T19:19:47.650639065+01:00"}"
00:21:16.860  >
00:21:16.860   19:19:47 sma.sma_qos -- common/autotest_common.sh@655 -- # es=1
00:21:16.860   19:19:47 sma.sma_qos -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:21:16.860   19:19:47 sma.sma_qos -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:21:16.860   19:19:47 sma.sma_qos -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:21:16.860   19:19:47 sma.sma_qos -- sma/qos.sh@217 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:16.860    19:19:47 sma.sma_qos -- sma/qos.sh@217 -- # uuid2base64 9f945741-42f2-4465-8d7d-f64fb5f5a39b
00:21:16.860    19:19:47 sma.sma_qos -- sma/common.sh@20 -- # python
00:21:16.860   19:19:47 sma.sma_qos -- common/autotest_common.sh@652 -- # local es=0
00:21:16.860   19:19:47 sma.sma_qos -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:16.860   19:19:47 sma.sma_qos -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:16.860   19:19:47 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:16.860    19:19:47 sma.sma_qos -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:16.860   19:19:47 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:16.860    19:19:47 sma.sma_qos -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:16.860   19:19:47 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:16.860   19:19:47 sma.sma_qos -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:16.860   19:19:47 sma.sma_qos -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:21:16.860   19:19:47 sma.sma_qos -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:21:17.120  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:21:17.120  I0000 00:00:1733509187.952904  610643 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:21:17.120  I0000 00:00:1733509187.954947  610643 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:21:17.120  I0000 00:00:1733509187.956665  610644 subchannel.cc:806] subchannel 0x56542f469560 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x56542f47ff20, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x56542f4366e0, grpc.internal.client_channel_call_destination=0x7ff8ac475390, grpc.internal.event_engine=0x56542f4655b0, grpc.internal.security_connector=0x56542f465540, grpc.internal.subchannel_pool=0x56542f4b9410, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x56542f383a60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-06T19:19:47.956162966+01:00"}), backing off for 999 ms
00:21:17.120  Traceback (most recent call last):
00:21:17.120    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:21:17.120      main(sys.argv[1:])
00:21:17.120    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:21:17.120      result = client.call(request['method'], request.get('params', {}))
00:21:17.120               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:21:17.120    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:21:17.120      response = func(request=json_format.ParseDict(params, input()))
00:21:17.120                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:21:17.120    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:21:17.120      return _end_unary_response_blocking(state, call, False, None)
00:21:17.120             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:21:17.120    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:21:17.120      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:21:17.120      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:21:17.120  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:21:17.120  	status = StatusCode.NOT_FOUND
00:21:17.120  	details = "Invalid device handle"
00:21:17.120  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Invalid device handle", grpc_status:5, created_time:"2024-12-06T19:19:47.957844068+01:00"}"
00:21:17.120  >
00:21:17.120   19:19:47 sma.sma_qos -- common/autotest_common.sh@655 -- # es=1
00:21:17.120   19:19:47 sma.sma_qos -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:21:17.120   19:19:47 sma.sma_qos -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:21:17.120   19:19:47 sma.sma_qos -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:21:17.120   19:19:47 sma.sma_qos -- sma/qos.sh@230 -- # diff /dev/fd/62 /dev/fd/61
00:21:17.120    19:19:47 sma.sma_qos -- sma/qos.sh@230 -- # jq --sort-keys
00:21:17.120    19:19:47 sma.sma_qos -- sma/qos.sh@230 -- # rpc_cmd bdev_get_bdevs -b null0
00:21:17.120    19:19:47 sma.sma_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:17.120    19:19:47 sma.sma_qos -- sma/qos.sh@230 -- # jq --sort-keys '.[].assigned_rate_limits'
00:21:17.120    19:19:47 sma.sma_qos -- common/autotest_common.sh@10 -- # set +x
00:21:17.120    19:19:47 sma.sma_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:17.120   19:19:48 sma.sma_qos -- sma/qos.sh@241 -- # trap - SIGINT SIGTERM EXIT
00:21:17.120   19:19:48 sma.sma_qos -- sma/qos.sh@242 -- # cleanup
00:21:17.120   19:19:48 sma.sma_qos -- sma/qos.sh@19 -- # killprocess 609642
00:21:17.120   19:19:48 sma.sma_qos -- common/autotest_common.sh@954 -- # '[' -z 609642 ']'
00:21:17.120   19:19:48 sma.sma_qos -- common/autotest_common.sh@958 -- # kill -0 609642
00:21:17.120    19:19:48 sma.sma_qos -- common/autotest_common.sh@959 -- # uname
00:21:17.120   19:19:48 sma.sma_qos -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:17.120    19:19:48 sma.sma_qos -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 609642
00:21:17.120   19:19:48 sma.sma_qos -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:21:17.120   19:19:48 sma.sma_qos -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:21:17.120   19:19:48 sma.sma_qos -- common/autotest_common.sh@972 -- # echo 'killing process with pid 609642'
00:21:17.120  killing process with pid 609642
00:21:17.120   19:19:48 sma.sma_qos -- common/autotest_common.sh@973 -- # kill 609642
00:21:17.120   19:19:48 sma.sma_qos -- common/autotest_common.sh@978 -- # wait 609642
00:21:19.653   19:19:50 sma.sma_qos -- sma/qos.sh@20 -- # killprocess 609643
00:21:19.653   19:19:50 sma.sma_qos -- common/autotest_common.sh@954 -- # '[' -z 609643 ']'
00:21:19.653   19:19:50 sma.sma_qos -- common/autotest_common.sh@958 -- # kill -0 609643
00:21:19.653    19:19:50 sma.sma_qos -- common/autotest_common.sh@959 -- # uname
00:21:19.653   19:19:50 sma.sma_qos -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:19.653    19:19:50 sma.sma_qos -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 609643
00:21:19.653   19:19:50 sma.sma_qos -- common/autotest_common.sh@960 -- # process_name=python3
00:21:19.653   19:19:50 sma.sma_qos -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:21:19.653   19:19:50 sma.sma_qos -- common/autotest_common.sh@972 -- # echo 'killing process with pid 609643'
00:21:19.653  killing process with pid 609643
00:21:19.653   19:19:50 sma.sma_qos -- common/autotest_common.sh@973 -- # kill 609643
00:21:19.653   19:19:50 sma.sma_qos -- common/autotest_common.sh@978 -- # wait 609643
00:21:19.653  
00:21:19.653  real	0m8.371s
00:21:19.653  user	0m11.437s
00:21:19.653  sys	0m1.279s
00:21:19.653   19:19:50 sma.sma_qos -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:19.653   19:19:50 sma.sma_qos -- common/autotest_common.sh@10 -- # set +x
00:21:19.653  ************************************
00:21:19.653  END TEST sma_qos
00:21:19.653  ************************************
00:21:19.653  
00:21:19.653  real	3m41.741s
00:21:19.653  user	6m36.636s
00:21:19.653  sys	0m26.371s
00:21:19.653   19:19:50 sma -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:19.653   19:19:50 sma -- common/autotest_common.sh@10 -- # set +x
00:21:19.653  ************************************
00:21:19.653  END TEST sma
00:21:19.653  ************************************
00:21:19.653   19:19:50  -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:21:19.653   19:19:50  -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:21:19.653   19:19:50  -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:21:19.653   19:19:50  -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:21:19.653   19:19:50  -- common/autotest_common.sh@726 -- # xtrace_disable
00:21:19.653   19:19:50  -- common/autotest_common.sh@10 -- # set +x
00:21:19.653   19:19:50  -- spdk/autotest.sh@388 -- # autotest_cleanup
00:21:19.653   19:19:50  -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:21:19.653   19:19:50  -- common/autotest_common.sh@1397 -- # xtrace_disable
00:21:19.653   19:19:50  -- common/autotest_common.sh@10 -- # set +x
00:21:21.560  INFO: APP EXITING
00:21:21.560  INFO: killing all VMs
00:21:21.560  INFO: killing vhost app
00:21:21.560  INFO: EXIT DONE
00:21:22.497  0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:21:22.497  0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:21:22.497  0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:21:22.497  0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:21:22.497  0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:21:22.497  0000:00:04.2 (8086 0e22): Already using the ioatdma driver
00:21:22.497  0000:00:04.1 (8086 0e21): Already using the ioatdma driver
00:21:22.497  0000:00:04.0 (8086 0e20): Already using the ioatdma driver
00:21:22.497  0000:0b:00.0 (8086 0a54): Already using the nvme driver
00:21:22.497  0000:80:04.7 (8086 0e27): Already using the ioatdma driver
00:21:22.497  0000:80:04.6 (8086 0e26): Already using the ioatdma driver
00:21:22.497  0000:80:04.5 (8086 0e25): Already using the ioatdma driver
00:21:22.757  0000:80:04.4 (8086 0e24): Already using the ioatdma driver
00:21:22.757  0000:80:04.3 (8086 0e23): Already using the ioatdma driver
00:21:22.757  0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:21:22.757  0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:21:22.757  0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:21:24.139  Cleaning
00:21:24.139  Removing:    /dev/shm/spdk_tgt_trace.pid510511
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid507877
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid508891
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid510511
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid511224
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid512053
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid512463
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid513396
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid513575
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid514149
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid514618
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid515204
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid515690
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid516160
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid516437
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid516601
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid516910
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid517372
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid520133
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid520567
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid520994
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid521136
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid522236
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid522375
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid523708
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid523948
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid524783
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid524931
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid525251
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid525468
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid526537
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid526696
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid527021
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid530287
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid538253
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid545856
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid555736
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid564807
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid565219
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid570670
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid578476
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid582920
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid587394
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid590480
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid590481
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid590482
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid602974
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid605860
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid606174
00:21:24.139  Removing:    /var/run/dpdk/spdk_pid609642
00:21:24.139  Clean
00:21:24.139   19:19:55  -- common/autotest_common.sh@1453 -- # return 0
00:21:24.139   19:19:55  -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:21:24.139   19:19:55  -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:24.139   19:19:55  -- common/autotest_common.sh@10 -- # set +x
00:21:24.139   19:19:55  -- spdk/autotest.sh@391 -- # timing_exit autotest
00:21:24.139   19:19:55  -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:24.139   19:19:55  -- common/autotest_common.sh@10 -- # set +x
00:21:24.139   19:19:55  -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/timing.txt
00:21:24.139   19:19:55  -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/udev.log ]]
00:21:24.139   19:19:55  -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/udev.log
00:21:24.139   19:19:55  -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:21:24.397    19:19:55  -- spdk/autotest.sh@398 -- # hostname
00:21:24.397   19:19:55  -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk -t spdk-gp-06 -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_test.info
00:21:24.397  geninfo: WARNING: invalid characters removed from testname!
00:21:56.501   19:20:24  -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info
00:21:57.482   19:20:28  -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info
00:22:00.781   19:20:31  -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info
00:22:03.349   19:20:34  -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info
00:22:06.663   19:20:37  -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info
00:22:09.205   19:20:39  -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info
00:22:12.498   19:20:42  -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:22:12.498   19:20:42  -- spdk/autorun.sh@1 -- $ timing_finish
00:22:12.498   19:20:42  -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/timing.txt ]]
00:22:12.498   19:20:42  -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:22:12.498   19:20:42  -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:22:12.498   19:20:42  -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/timing.txt
00:22:12.498  + [[ -n 427707 ]]
00:22:12.498  + sudo kill 427707
00:22:12.509  [Pipeline] }
00:22:12.525  [Pipeline] // stage
00:22:12.531  [Pipeline] }
00:22:12.546  [Pipeline] // timeout
00:22:12.552  [Pipeline] }
00:22:12.568  [Pipeline] // catchError
00:22:12.574  [Pipeline] }
00:22:12.590  [Pipeline] // wrap
00:22:12.596  [Pipeline] }
00:22:12.609  [Pipeline] // catchError
00:22:12.619  [Pipeline] stage
00:22:12.622  [Pipeline] { (Epilogue)
00:22:12.637  [Pipeline] catchError
00:22:12.639  [Pipeline] {
00:22:12.651  [Pipeline] echo
00:22:12.653  Cleanup processes
00:22:12.660  [Pipeline] sh
00:22:12.950  + sudo pgrep -af /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:22:12.950  617326 sudo pgrep -af /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:22:12.964  [Pipeline] sh
00:22:13.249  ++ sudo pgrep -af /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:22:13.249  ++ grep -v 'sudo pgrep'
00:22:13.249  ++ awk '{print $1}'
00:22:13.249  + sudo kill -9
00:22:13.249  + true
00:22:13.260  [Pipeline] sh
00:22:13.541  + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:22:23.521  [Pipeline] sh
00:22:23.811  + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:22:23.811  Artifacts sizes are good
00:22:23.827  [Pipeline] archiveArtifacts
00:22:23.835  Archiving artifacts
00:22:23.993  [Pipeline] sh
00:22:24.279  + sudo chown -R sys_sgci: /var/jenkins/workspace/vfio-user-phy-autotest
00:22:24.295  [Pipeline] cleanWs
00:22:24.305  [WS-CLEANUP] Deleting project workspace...
00:22:24.305  [WS-CLEANUP] Deferred wipeout is used...
00:22:24.313  [WS-CLEANUP] done
00:22:24.315  [Pipeline] }
00:22:24.336  [Pipeline] // catchError
00:22:24.349  [Pipeline] sh
00:22:24.722  + logger -p user.info -t JENKINS-CI
00:22:24.730  [Pipeline] }
00:22:24.743  [Pipeline] // stage
00:22:24.747  [Pipeline] }
00:22:24.762  [Pipeline] // node
00:22:24.767  [Pipeline] End of Pipeline
00:22:24.804  Finished: SUCCESS