00:00:00.001  Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 977
00:00:00.001  originally caused by:
00:00:00.001   Started by upstream project "nightly-trigger" build number 3639
00:00:00.001   originally caused by:
00:00:00.002    Started by timer
00:00:00.095  Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/vfio-user-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.095  The recommended git tool is: git
00:00:00.095  using credential 00000000-0000-0000-0000-000000000002
00:00:00.097   > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/vfio-user-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.127  Fetching changes from the remote Git repository
00:00:00.129   > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.167  Using shallow fetch with depth 1
00:00:00.167  Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.167   > git --version # timeout=10
00:00:00.219   > git --version # 'git version 2.39.2'
00:00:00.219  using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.246  Setting http proxy: proxy-dmz.intel.com:911
00:00:00.246   > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:08.052   > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:08.065   > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:08.076  Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD)
00:00:08.076   > git config core.sparsecheckout # timeout=10
00:00:08.087   > git read-tree -mu HEAD # timeout=10
00:00:08.102   > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5
00:00:08.118  Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd"
00:00:08.118   > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10
00:00:08.202  [Pipeline] Start of Pipeline
00:00:08.214  [Pipeline] library
00:00:08.216  Loading library shm_lib@master
00:00:08.216  Library shm_lib@master is cached. Copying from home.
00:00:08.229  [Pipeline] node
00:00:08.246  Running on WFP17 in /var/jenkins/workspace/vfio-user-phy-autotest
00:00:08.247  [Pipeline] {
00:00:08.255  [Pipeline] catchError
00:00:08.257  [Pipeline] {
00:00:08.266  [Pipeline] wrap
00:00:08.273  [Pipeline] {
00:00:08.280  [Pipeline] stage
00:00:08.281  [Pipeline] { (Prologue)
00:00:08.501  [Pipeline] sh
00:00:08.786  + logger -p user.info -t JENKINS-CI
00:00:08.804  [Pipeline] echo
00:00:08.806  Node: WFP17
00:00:08.815  [Pipeline] sh
00:00:09.120  [Pipeline] setCustomBuildProperty
00:00:09.131  [Pipeline] echo
00:00:09.133  Cleanup processes
00:00:09.138  [Pipeline] sh
00:00:09.424  + sudo pgrep -af /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:00:09.424  264615 sudo pgrep -af /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:00:09.440  [Pipeline] sh
00:00:09.733  ++ sudo pgrep -af /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:00:09.733  ++ grep -v 'sudo pgrep'
00:00:09.733  ++ awk '{print $1}'
00:00:09.733  + sudo kill -9
00:00:09.733  + true
00:00:09.750  [Pipeline] cleanWs
00:00:09.761  [WS-CLEANUP] Deleting project workspace...
00:00:09.761  [WS-CLEANUP] Deferred wipeout is used...
00:00:09.769  [WS-CLEANUP] done
00:00:09.773  [Pipeline] setCustomBuildProperty
00:00:09.789  [Pipeline] sh
00:00:10.075  + sudo git config --global --replace-all safe.directory '*'
00:00:10.173  [Pipeline] httpRequest
00:00:10.557  [Pipeline] echo
00:00:10.558  Sorcerer 10.211.164.20 is alive
00:00:10.567  [Pipeline] retry
00:00:10.569  [Pipeline] {
00:00:10.581  [Pipeline] httpRequest
00:00:10.586  HttpMethod: GET
00:00:10.587  URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:10.587  Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:10.591  Response Code: HTTP/1.1 200 OK
00:00:10.592  Success: Status code 200 is in the accepted range: 200,404
00:00:10.592  Saving response body to /var/jenkins/workspace/vfio-user-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:12.512  [Pipeline] }
00:00:12.530  [Pipeline] // retry
00:00:12.537  [Pipeline] sh
00:00:12.829  + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:12.862  [Pipeline] httpRequest
00:00:13.253  [Pipeline] echo
00:00:13.255  Sorcerer 10.211.164.20 is alive
00:00:13.263  [Pipeline] retry
00:00:13.265  [Pipeline] {
00:00:13.279  [Pipeline] httpRequest
00:00:13.284  HttpMethod: GET
00:00:13.284  URL: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz
00:00:13.285  Sending request to url: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz
00:00:13.305  Response Code: HTTP/1.1 200 OK
00:00:13.305  Success: Status code 200 is in the accepted range: 200,404
00:00:13.306  Saving response body to /var/jenkins/workspace/vfio-user-phy-autotest/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz
00:01:52.593  [Pipeline] }
00:01:52.613  [Pipeline] // retry
00:01:52.621  [Pipeline] sh
00:01:52.910  + tar --no-same-owner -xf spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz
00:01:55.458  [Pipeline] sh
00:01:55.777  + git -C spdk log --oneline -n5
00:01:55.777  83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process
00:01:55.777  0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort()
00:01:55.777  4bcab9fb9 correct kick for CQ full case
00:01:55.777  8531656d3 test/nvmf: Interrupt test for local pcie nvme device
00:01:55.777  318515b44 nvme/perf: interrupt mode support for pcie controller
00:01:55.794  [Pipeline] withCredentials
00:01:55.804   > git --version # timeout=10
00:01:55.818   > git --version # 'git version 2.39.2'
00:01:55.835  Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:01:55.837  [Pipeline] {
00:01:55.846  [Pipeline] retry
00:01:55.848  [Pipeline] {
00:01:55.862  [Pipeline] sh
00:01:56.145  + git ls-remote http://dpdk.org/git/dpdk-stable v23.11
00:01:56.156  [Pipeline] }
00:01:56.174  [Pipeline] // retry
00:01:56.180  [Pipeline] }
00:01:56.196  [Pipeline] // withCredentials
00:01:56.206  [Pipeline] httpRequest
00:01:56.555  [Pipeline] echo
00:01:56.556  Sorcerer 10.211.164.20 is alive
00:01:56.563  [Pipeline] retry
00:01:56.565  [Pipeline] {
00:01:56.576  [Pipeline] httpRequest
00:01:56.581  HttpMethod: GET
00:01:56.581  URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:56.582  Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:56.590  Response Code: HTTP/1.1 200 OK
00:01:56.590  Success: Status code 200 is in the accepted range: 200,404
00:01:56.591  Saving response body to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:02:08.456  [Pipeline] }
00:02:08.473  [Pipeline] // retry
00:02:08.479  [Pipeline] sh
00:02:08.764  + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:02:10.156  [Pipeline] sh
00:02:10.439  + git -C dpdk log --oneline -n5
00:02:10.439  eeb0605f11 version: 23.11.0
00:02:10.439  238778122a doc: update release notes for 23.11
00:02:10.439  46aa6b3cfc doc: fix description of RSS features
00:02:10.439  dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:02:10.439  7e421ae345 devtools: support skipping forbid rule check
00:02:10.449  [Pipeline] }
00:02:10.462  [Pipeline] // stage
00:02:10.470  [Pipeline] stage
00:02:10.472  [Pipeline] { (Prepare)
00:02:10.497  [Pipeline] writeFile
00:02:10.512  [Pipeline] sh
00:02:10.795  + logger -p user.info -t JENKINS-CI
00:02:10.806  [Pipeline] sh
00:02:11.088  + logger -p user.info -t JENKINS-CI
00:02:11.101  [Pipeline] sh
00:02:11.388  + cat autorun-spdk.conf
00:02:11.388  SPDK_RUN_FUNCTIONAL_TEST=1
00:02:11.388  SPDK_TEST_VFIOUSER_QEMU=1
00:02:11.388  SPDK_RUN_ASAN=1
00:02:11.388  SPDK_RUN_UBSAN=1
00:02:11.388  SPDK_TEST_SMA=1
00:02:11.388  SPDK_TEST_NATIVE_DPDK=v23.11
00:02:11.388  SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build
00:02:11.395  RUN_NIGHTLY=1
00:02:11.400  [Pipeline] readFile
00:02:11.419  [Pipeline] copyArtifacts
00:02:14.420  Copied 1 artifact from "qemu-vfio" build number 34
00:02:14.425  [Pipeline] sh
00:02:14.803  + tar xf qemu-vfio.tar.gz
00:02:16.874  [Pipeline] copyArtifacts
00:02:16.921  Copied 1 artifact from "vagrant-build-vhost" build number 6
00:02:16.925  [Pipeline] sh
00:02:17.269  + sudo mkdir -p /var/spdk/dependencies/vhost
00:02:17.346  [Pipeline] sh
00:02:17.774  + cd /var/spdk/dependencies/vhost
00:02:17.774  + md5sum --quiet -c /var/jenkins/workspace/vfio-user-phy-autotest/spdk_test_image.qcow2.gz.md5
00:02:20.435  [Pipeline] withEnv
00:02:20.437  [Pipeline] {
00:02:20.450  [Pipeline] sh
00:02:20.737  + set -ex
00:02:20.737  + [[ -f /var/jenkins/workspace/vfio-user-phy-autotest/autorun-spdk.conf ]]
00:02:20.737  + source /var/jenkins/workspace/vfio-user-phy-autotest/autorun-spdk.conf
00:02:20.737  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:20.737  ++ SPDK_TEST_VFIOUSER_QEMU=1
00:02:20.737  ++ SPDK_RUN_ASAN=1
00:02:20.737  ++ SPDK_RUN_UBSAN=1
00:02:20.737  ++ SPDK_TEST_SMA=1
00:02:20.737  ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:02:20.737  ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build
00:02:20.737  ++ RUN_NIGHTLY=1
00:02:20.737  + case $SPDK_TEST_NVMF_NICS in
00:02:20.737  + DRIVERS=
00:02:20.737  + [[ -n '' ]]
00:02:20.737  + exit 0
00:02:20.746  [Pipeline] }
00:02:20.761  [Pipeline] // withEnv
00:02:20.767  [Pipeline] }
00:02:20.781  [Pipeline] // stage
00:02:20.791  [Pipeline] catchError
00:02:20.793  [Pipeline] {
00:02:20.807  [Pipeline] timeout
00:02:20.807  Timeout set to expire in 35 min
00:02:20.809  [Pipeline] {
00:02:20.822  [Pipeline] stage
00:02:20.824  [Pipeline] { (Tests)
00:02:20.838  [Pipeline] sh
00:02:21.122  + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/vfio-user-phy-autotest
00:02:21.122  ++ readlink -f /var/jenkins/workspace/vfio-user-phy-autotest
00:02:21.122  + DIR_ROOT=/var/jenkins/workspace/vfio-user-phy-autotest
00:02:21.122  + [[ -n /var/jenkins/workspace/vfio-user-phy-autotest ]]
00:02:21.122  + DIR_SPDK=/var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:02:21.122  + DIR_OUTPUT=/var/jenkins/workspace/vfio-user-phy-autotest/output
00:02:21.122  + [[ -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk ]]
00:02:21.122  + [[ ! -d /var/jenkins/workspace/vfio-user-phy-autotest/output ]]
00:02:21.122  + mkdir -p /var/jenkins/workspace/vfio-user-phy-autotest/output
00:02:21.122  + [[ -d /var/jenkins/workspace/vfio-user-phy-autotest/output ]]
00:02:21.122  + [[ vfio-user-phy-autotest == pkgdep-* ]]
00:02:21.122  + cd /var/jenkins/workspace/vfio-user-phy-autotest
00:02:21.122  + source /etc/os-release
00:02:21.122  ++ NAME='Fedora Linux'
00:02:21.122  ++ VERSION='39 (Cloud Edition)'
00:02:21.122  ++ ID=fedora
00:02:21.122  ++ VERSION_ID=39
00:02:21.122  ++ VERSION_CODENAME=
00:02:21.122  ++ PLATFORM_ID=platform:f39
00:02:21.122  ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:21.122  ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:21.122  ++ LOGO=fedora-logo-icon
00:02:21.122  ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:21.122  ++ HOME_URL=https://fedoraproject.org/
00:02:21.122  ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:21.122  ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:21.122  ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:21.122  ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:21.122  ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:21.122  ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:21.122  ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:21.122  ++ SUPPORT_END=2024-11-12
00:02:21.122  ++ VARIANT='Cloud Edition'
00:02:21.122  ++ VARIANT_ID=cloud
00:02:21.122  + uname -a
00:02:21.122  Linux spdk-wfp-17 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:21.122  + sudo /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh status
00:02:22.062  Hugepages
00:02:22.062  node     hugesize     free /  total
00:02:22.062  node0   1048576kB        0 /      0
00:02:22.062  node0      2048kB        0 /      0
00:02:22.062  node1   1048576kB        0 /      0
00:02:22.062  node1      2048kB        0 /      0
00:02:22.062  
00:02:22.062  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:02:22.062  I/OAT                     0000:00:04.0    8086   6f20   0       ioatdma          -          -
00:02:22.062  I/OAT                     0000:00:04.1    8086   6f21   0       ioatdma          -          -
00:02:22.062  I/OAT                     0000:00:04.2    8086   6f22   0       ioatdma          -          -
00:02:22.062  I/OAT                     0000:00:04.3    8086   6f23   0       ioatdma          -          -
00:02:22.062  I/OAT                     0000:00:04.4    8086   6f24   0       ioatdma          -          -
00:02:22.062  I/OAT                     0000:00:04.5    8086   6f25   0       ioatdma          -          -
00:02:22.062  I/OAT                     0000:00:04.6    8086   6f26   0       ioatdma          -          -
00:02:22.062  I/OAT                     0000:00:04.7    8086   6f27   0       ioatdma          -          -
00:02:22.062  NVMe                      0000:0d:00.0    8086   0a54   0       nvme             nvme0      nvme0n1
00:02:22.062  I/OAT                     0000:80:04.0    8086   6f20   1       ioatdma          -          -
00:02:22.062  I/OAT                     0000:80:04.1    8086   6f21   1       ioatdma          -          -
00:02:22.062  I/OAT                     0000:80:04.2    8086   6f22   1       ioatdma          -          -
00:02:22.062  I/OAT                     0000:80:04.3    8086   6f23   1       ioatdma          -          -
00:02:22.062  I/OAT                     0000:80:04.4    8086   6f24   1       ioatdma          -          -
00:02:22.062  I/OAT                     0000:80:04.5    8086   6f25   1       ioatdma          -          -
00:02:22.062  I/OAT                     0000:80:04.6    8086   6f26   1       ioatdma          -          -
00:02:22.062  I/OAT                     0000:80:04.7    8086   6f27   1       ioatdma          -          -
00:02:22.062  + rm -f /tmp/spdk-ld-path
00:02:22.062  + source autorun-spdk.conf
00:02:22.062  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:22.062  ++ SPDK_TEST_VFIOUSER_QEMU=1
00:02:22.062  ++ SPDK_RUN_ASAN=1
00:02:22.062  ++ SPDK_RUN_UBSAN=1
00:02:22.062  ++ SPDK_TEST_SMA=1
00:02:22.062  ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:02:22.062  ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build
00:02:22.062  ++ RUN_NIGHTLY=1
00:02:22.062  + ((  SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1  ))
00:02:22.062  + [[ -n '' ]]
00:02:22.062  + sudo git config --global --add safe.directory /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:02:22.062  + for M in /var/spdk/build-*-manifest.txt
00:02:22.062  + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:22.062  + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/vfio-user-phy-autotest/output/
00:02:22.062  + for M in /var/spdk/build-*-manifest.txt
00:02:22.062  + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:22.062  + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/vfio-user-phy-autotest/output/
00:02:22.062  + for M in /var/spdk/build-*-manifest.txt
00:02:22.062  + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:22.062  + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/vfio-user-phy-autotest/output/
00:02:22.062  ++ uname
00:02:22.062  + [[ Linux == \L\i\n\u\x ]]
00:02:22.062  + sudo dmesg -T
00:02:22.062  + sudo dmesg --clear
00:02:22.062  + dmesg_pid=265937
00:02:22.062  + sudo dmesg -Tw
00:02:22.062  + [[ Fedora Linux == FreeBSD ]]
00:02:22.062  + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:22.062  + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:22.062  + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:22.062  + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:02:22.062  + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:02:22.062  + [[ -x /usr/src/fio-static/fio ]]
00:02:22.062  + export FIO_BIN=/usr/src/fio-static/fio
00:02:22.062  + FIO_BIN=/usr/src/fio-static/fio
00:02:22.062  + [[ /var/jenkins/workspace/vfio-user-phy-autotest/qemu_vfio/bin/qemu-system-x86_64 == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\v\f\i\o\-\u\s\e\r\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:22.062  ++ ldd /var/jenkins/workspace/vfio-user-phy-autotest/qemu_vfio/bin/qemu-system-x86_64
00:02:22.062  + deps='	linux-vdso.so.1 (0x00007ffea63b5000)
00:02:22.062  	libpixman-1.so.0 => /usr/lib64/libpixman-1.so.0 (0x00007fcb4a32a000)
00:02:22.062  	libz.so.1 => /usr/lib64/libz.so.1 (0x00007fcb4a310000)
00:02:22.062  	libudev.so.1 => /usr/lib64/libudev.so.1 (0x00007fcb4a2d9000)
00:02:22.062  	libpmem.so.1 => /usr/lib64/libpmem.so.1 (0x00007fcb4a280000)
00:02:22.063  	libdaxctl.so.1 => /usr/lib64/libdaxctl.so.1 (0x00007fcb4a273000)
00:02:22.063  	libnuma.so.1 => /usr/lib64/libnuma.so.1 (0x00007fcb4a264000)
00:02:22.063  	libgio-2.0.so.0 => /usr/lib64/libgio-2.0.so.0 (0x00007fcb4a08a000)
00:02:22.063  	libgobject-2.0.so.0 => /usr/lib64/libgobject-2.0.so.0 (0x00007fcb4a02a000)
00:02:22.063  	libglib-2.0.so.0 => /usr/lib64/libglib-2.0.so.0 (0x00007fcb49ee0000)
00:02:22.063  	librdmacm.so.1 => /usr/lib64/librdmacm.so.1 (0x00007fcb49ec4000)
00:02:22.063  	libibverbs.so.1 => /usr/lib64/libibverbs.so.1 (0x00007fcb49ea2000)
00:02:22.063  	libslirp.so.0 => /usr/lib64/libslirp.so.0 (0x00007fcb49e80000)
00:02:22.063  	libbpf.so.0 => not found
00:02:22.063  	libncursesw.so.6 => /usr/lib64/libncursesw.so.6 (0x00007fcb49e3f000)
00:02:22.063  	libtinfo.so.6 => /usr/lib64/libtinfo.so.6 (0x00007fcb49e0a000)
00:02:22.063  	libgmodule-2.0.so.0 => /usr/lib64/libgmodule-2.0.so.0 (0x00007fcb49e03000)
00:02:22.063  	liburing.so.2 => /usr/lib64/liburing.so.2 (0x00007fcb49dfb000)
00:02:22.063  	libfuse3.so.3 => /usr/lib64/libfuse3.so.3 (0x00007fcb49db9000)
00:02:22.063  	libiscsi.so.9 => /usr/lib64/iscsi/libiscsi.so.9 (0x00007fcb49d89000)
00:02:22.063  	libaio.so.1 => /usr/lib64/libaio.so.1 (0x00007fcb49d84000)
00:02:22.063  	librbd.so.1 => /usr/lib64/librbd.so.1 (0x00007fcb494c9000)
00:02:22.063  	librados.so.2 => /usr/lib64/librados.so.2 (0x00007fcb49301000)
00:02:22.063  	libm.so.6 => /usr/lib64/libm.so.6 (0x00007fcb49220000)
00:02:22.063  	libgcc_s.so.1 => /usr/lib64/libgcc_s.so.1 (0x00007fcb491fb000)
00:02:22.063  	libc.so.6 => /usr/lib64/libc.so.6 (0x00007fcb49017000)
00:02:22.063  	/lib64/ld-linux-x86-64.so.2 (0x00007fcb4b48e000)
00:02:22.063  	libcap.so.2 => /usr/lib64/libcap.so.2 (0x00007fcb4900d000)
00:02:22.063  	libndctl.so.6 => /usr/lib64/libndctl.so.6 (0x00007fcb48fe0000)
00:02:22.063  	libuuid.so.1 => /usr/lib64/libuuid.so.1 (0x00007fcb48fd6000)
00:02:22.063  	libkmod.so.2 => /usr/lib64/libkmod.so.2 (0x00007fcb48fba000)
00:02:22.063  	libmount.so.1 => /usr/lib64/libmount.so.1 (0x00007fcb48f67000)
00:02:22.063  	libselinux.so.1 => /usr/lib64/libselinux.so.1 (0x00007fcb48f3a000)
00:02:22.063  	libffi.so.8 => /usr/lib64/libffi.so.8 (0x00007fcb48f2a000)
00:02:22.063  	libpcre2-8.so.0 => /usr/lib64/libpcre2-8.so.0 (0x00007fcb48e8f000)
00:02:22.063  	libnl-3.so.200 => /usr/lib64/libnl-3.so.200 (0x00007fcb48e6a000)
00:02:22.063  	libnl-route-3.so.200 => /usr/lib64/libnl-route-3.so.200 (0x00007fcb48dd2000)
00:02:22.063  	libgcrypt.so.20 => /usr/lib64/libgcrypt.so.20 (0x00007fcb48c98000)
00:02:22.063  	libssl.so.3 => /usr/lib64/libssl.so.3 (0x00007fcb48bf5000)
00:02:22.063  	libcryptsetup.so.12 => /usr/lib64/libcryptsetup.so.12 (0x00007fcb48b74000)
00:02:22.063  	libceph-common.so.2 => /usr/lib64/ceph/libceph-common.so.2 (0x00007fcb47f44000)
00:02:22.063  	libcrypto.so.3 => /usr/lib64/libcrypto.so.3 (0x00007fcb47a6b000)
00:02:22.063  	libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x00007fcb47815000)
00:02:22.063  	libzstd.so.1 => /usr/lib64/libzstd.so.1 (0x00007fcb47756000)
00:02:22.063  	liblzma.so.5 => /usr/lib64/liblzma.so.5 (0x00007fcb47723000)
00:02:22.063  	libblkid.so.1 => /usr/lib64/libblkid.so.1 (0x00007fcb476e7000)
00:02:22.063  	libgpg-error.so.0 => /usr/lib64/libgpg-error.so.0 (0x00007fcb476c1000)
00:02:22.063  	libdevmapper.so.1.02 => /usr/lib64/libdevmapper.so.1.02 (0x00007fcb47662000)
00:02:22.063  	libargon2.so.1 => /usr/lib64/libargon2.so.1 (0x00007fcb4765a000)
00:02:22.063  	libjson-c.so.5 => /usr/lib64/libjson-c.so.5 (0x00007fcb47646000)
00:02:22.063  	libresolv.so.2 => /usr/lib64/libresolv.so.2 (0x00007fcb47635000)
00:02:22.063  	libcurl.so.4 => /usr/lib64/libcurl.so.4 (0x00007fcb47581000)
00:02:22.063  	libthrift-0.15.0.so => /usr/lib64/libthrift-0.15.0.so (0x00007fcb474e7000)
00:02:22.063  	libnghttp2.so.14 => /usr/lib64/libnghttp2.so.14 (0x00007fcb474ba000)
00:02:22.063  	libidn2.so.0 => /usr/lib64/libidn2.so.0 (0x00007fcb47498000)
00:02:22.063  	libssh.so.4 => /usr/lib64/libssh.so.4 (0x00007fcb47425000)
00:02:22.063  	libpsl.so.5 => /usr/lib64/libpsl.so.5 (0x00007fcb47411000)
00:02:22.063  	libgssapi_krb5.so.2 => /usr/lib64/libgssapi_krb5.so.2 (0x00007fcb473bb000)
00:02:22.063  	libldap.so.2 => /usr/lib64/libldap.so.2 (0x00007fcb47354000)
00:02:22.063  	liblber.so.2 => /usr/lib64/liblber.so.2 (0x00007fcb47342000)
00:02:22.063  	libbrotlidec.so.1 => /usr/lib64/libbrotlidec.so.1 (0x00007fcb47334000)
00:02:22.063  	libunistring.so.5 => /usr/lib64/libunistring.so.5 (0x00007fcb47184000)
00:02:22.063  	libkrb5.so.3 => /usr/lib64/libkrb5.so.3 (0x00007fcb470ab000)
00:02:22.063  	libk5crypto.so.3 => /usr/lib64/libk5crypto.so.3 (0x00007fcb47091000)
00:02:22.063  	libcom_err.so.2 => /usr/lib64/libcom_err.so.2 (0x00007fcb4708a000)
00:02:22.063  	libkrb5support.so.0 => /usr/lib64/libkrb5support.so.0 (0x00007fcb4707a000)
00:02:22.063  	libkeyutils.so.1 => /usr/lib64/libkeyutils.so.1 (0x00007fcb47073000)
00:02:22.063  	libevent-2.1.so.7 => /usr/lib64/libevent-2.1.so.7 (0x00007fcb4701b000)
00:02:22.063  	libsasl2.so.3 => /usr/lib64/libsasl2.so.3 (0x00007fcb46ffc000)
00:02:22.063  	libbrotlicommon.so.1 => /usr/lib64/libbrotlicommon.so.1 (0x00007fcb46fd7000)
00:02:22.063  	libcrypt.so.2 => /usr/lib64/libcrypt.so.2 (0x00007fcb46f9e000)'
00:02:22.063  + [[ 	linux-vdso.so.1 (0x00007ffea63b5000)
00:02:22.063  	libpixman-1.so.0 => /usr/lib64/libpixman-1.so.0 (0x00007fcb4a32a000)
00:02:22.063  	libz.so.1 => /usr/lib64/libz.so.1 (0x00007fcb4a310000)
00:02:22.063  	libudev.so.1 => /usr/lib64/libudev.so.1 (0x00007fcb4a2d9000)
00:02:22.063  	libpmem.so.1 => /usr/lib64/libpmem.so.1 (0x00007fcb4a280000)
00:02:22.063  	libdaxctl.so.1 => /usr/lib64/libdaxctl.so.1 (0x00007fcb4a273000)
00:02:22.063  	libnuma.so.1 => /usr/lib64/libnuma.so.1 (0x00007fcb4a264000)
00:02:22.063  	libgio-2.0.so.0 => /usr/lib64/libgio-2.0.so.0 (0x00007fcb4a08a000)
00:02:22.063  	libgobject-2.0.so.0 => /usr/lib64/libgobject-2.0.so.0 (0x00007fcb4a02a000)
00:02:22.063  	libglib-2.0.so.0 => /usr/lib64/libglib-2.0.so.0 (0x00007fcb49ee0000)
00:02:22.063  	librdmacm.so.1 => /usr/lib64/librdmacm.so.1 (0x00007fcb49ec4000)
00:02:22.063  	libibverbs.so.1 => /usr/lib64/libibverbs.so.1 (0x00007fcb49ea2000)
00:02:22.063  	libslirp.so.0 => /usr/lib64/libslirp.so.0 (0x00007fcb49e80000)
00:02:22.063  	libbpf.so.0 => not found
00:02:22.063  	libncursesw.so.6 => /usr/lib64/libncursesw.so.6 (0x00007fcb49e3f000)
00:02:22.063  	libtinfo.so.6 => /usr/lib64/libtinfo.so.6 (0x00007fcb49e0a000)
00:02:22.063  	libgmodule-2.0.so.0 => /usr/lib64/libgmodule-2.0.so.0 (0x00007fcb49e03000)
00:02:22.063  	liburing.so.2 => /usr/lib64/liburing.so.2 (0x00007fcb49dfb000)
00:02:22.063  	libfuse3.so.3 => /usr/lib64/libfuse3.so.3 (0x00007fcb49db9000)
00:02:22.063  	libiscsi.so.9 => /usr/lib64/iscsi/libiscsi.so.9 (0x00007fcb49d89000)
00:02:22.063  	libaio.so.1 => /usr/lib64/libaio.so.1 (0x00007fcb49d84000)
00:02:22.063  	librbd.so.1 => /usr/lib64/librbd.so.1 (0x00007fcb494c9000)
00:02:22.063  	librados.so.2 => /usr/lib64/librados.so.2 (0x00007fcb49301000)
00:02:22.063  	libm.so.6 => /usr/lib64/libm.so.6 (0x00007fcb49220000)
00:02:22.063  	libgcc_s.so.1 => /usr/lib64/libgcc_s.so.1 (0x00007fcb491fb000)
00:02:22.063  	libc.so.6 => /usr/lib64/libc.so.6 (0x00007fcb49017000)
00:02:22.063  	/lib64/ld-linux-x86-64.so.2 (0x00007fcb4b48e000)
00:02:22.063  	libcap.so.2 => /usr/lib64/libcap.so.2 (0x00007fcb4900d000)
00:02:22.063  	libndctl.so.6 => /usr/lib64/libndctl.so.6 (0x00007fcb48fe0000)
00:02:22.063  	libuuid.so.1 => /usr/lib64/libuuid.so.1 (0x00007fcb48fd6000)
00:02:22.063  	libkmod.so.2 => /usr/lib64/libkmod.so.2 (0x00007fcb48fba000)
00:02:22.063  	libmount.so.1 => /usr/lib64/libmount.so.1 (0x00007fcb48f67000)
00:02:22.063  	libselinux.so.1 => /usr/lib64/libselinux.so.1 (0x00007fcb48f3a000)
00:02:22.063  	libffi.so.8 => /usr/lib64/libffi.so.8 (0x00007fcb48f2a000)
00:02:22.063  	libpcre2-8.so.0 => /usr/lib64/libpcre2-8.so.0 (0x00007fcb48e8f000)
00:02:22.063  	libnl-3.so.200 => /usr/lib64/libnl-3.so.200 (0x00007fcb48e6a000)
00:02:22.063  	libnl-route-3.so.200 => /usr/lib64/libnl-route-3.so.200 (0x00007fcb48dd2000)
00:02:22.063  	libgcrypt.so.20 => /usr/lib64/libgcrypt.so.20 (0x00007fcb48c98000)
00:02:22.063  	libssl.so.3 => /usr/lib64/libssl.so.3 (0x00007fcb48bf5000)
00:02:22.063  	libcryptsetup.so.12 => /usr/lib64/libcryptsetup.so.12 (0x00007fcb48b74000)
00:02:22.063  	libceph-common.so.2 => /usr/lib64/ceph/libceph-common.so.2 (0x00007fcb47f44000)
00:02:22.063  	libcrypto.so.3 => /usr/lib64/libcrypto.so.3 (0x00007fcb47a6b000)
00:02:22.063  	libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x00007fcb47815000)
00:02:22.063  	libzstd.so.1 => /usr/lib64/libzstd.so.1 (0x00007fcb47756000)
00:02:22.063  	liblzma.so.5 => /usr/lib64/liblzma.so.5 (0x00007fcb47723000)
00:02:22.063  	libblkid.so.1 => /usr/lib64/libblkid.so.1 (0x00007fcb476e7000)
00:02:22.063  	libgpg-error.so.0 => /usr/lib64/libgpg-error.so.0 (0x00007fcb476c1000)
00:02:22.063  	libdevmapper.so.1.02 => /usr/lib64/libdevmapper.so.1.02 (0x00007fcb47662000)
00:02:22.063  	libargon2.so.1 => /usr/lib64/libargon2.so.1 (0x00007fcb4765a000)
00:02:22.063  	libjson-c.so.5 => /usr/lib64/libjson-c.so.5 (0x00007fcb47646000)
00:02:22.063  	libresolv.so.2 => /usr/lib64/libresolv.so.2 (0x00007fcb47635000)
00:02:22.063  	libcurl.so.4 => /usr/lib64/libcurl.so.4 (0x00007fcb47581000)
00:02:22.063  	libthrift-0.15.0.so => /usr/lib64/libthrift-0.15.0.so (0x00007fcb474e7000)
00:02:22.063  	libnghttp2.so.14 => /usr/lib64/libnghttp2.so.14 (0x00007fcb474ba000)
00:02:22.063  	libidn2.so.0 => /usr/lib64/libidn2.so.0 (0x00007fcb47498000)
00:02:22.063  	libssh.so.4 => /usr/lib64/libssh.so.4 (0x00007fcb47425000)
00:02:22.063  	libpsl.so.5 => /usr/lib64/libpsl.so.5 (0x00007fcb47411000)
00:02:22.063  	libgssapi_krb5.so.2 => /usr/lib64/libgssapi_krb5.so.2 (0x00007fcb473bb000)
00:02:22.063  	libldap.so.2 => /usr/lib64/libldap.so.2 (0x00007fcb47354000)
00:02:22.063  	liblber.so.2 => /usr/lib64/liblber.so.2 (0x00007fcb47342000)
00:02:22.063  	libbrotlidec.so.1 => /usr/lib64/libbrotlidec.so.1 (0x00007fcb47334000)
00:02:22.063  	libunistring.so.5 => /usr/lib64/libunistring.so.5 (0x00007fcb47184000)
00:02:22.063  	libkrb5.so.3 => /usr/lib64/libkrb5.so.3 (0x00007fcb470ab000)
00:02:22.063  	libk5crypto.so.3 => /usr/lib64/libk5crypto.so.3 (0x00007fcb47091000)
00:02:22.063  	libcom_err.so.2 => /usr/lib64/libcom_err.so.2 (0x00007fcb4708a000)
00:02:22.063  	libkrb5support.so.0 => /usr/lib64/libkrb5support.so.0 (0x00007fcb4707a000)
00:02:22.063  	libkeyutils.so.1 => /usr/lib64/libkeyutils.so.1 (0x00007fcb47073000)
00:02:22.063  	libevent-2.1.so.7 => /usr/lib64/libevent-2.1.so.7 (0x00007fcb4701b000)
00:02:22.063  	libsasl2.so.3 => /usr/lib64/libsasl2.so.3 (0x00007fcb46ffc000)
00:02:22.063  	libbrotlicommon.so.1 => /usr/lib64/libbrotlicommon.so.1 (0x00007fcb46fd7000)
00:02:22.063  	libcrypt.so.2 => /usr/lib64/libcrypt.so.2 (0x00007fcb46f9e000) == *\n\o\t\ \f\o\u\n\d* ]]
00:02:22.063  + unset -v VFIO_QEMU_BIN
00:02:22.063  + [[ ! -v VFIO_QEMU_BIN ]]
00:02:22.063  + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:22.063  + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:22.063  + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:22.063  + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:22.063  + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:22.063  + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:22.064  + spdk/autorun.sh /var/jenkins/workspace/vfio-user-phy-autotest/autorun-spdk.conf
00:02:22.064    18:23:08  -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:22.064   18:23:08  -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/vfio-user-phy-autotest/autorun-spdk.conf
00:02:22.064    18:23:08  -- vfio-user-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:22.064    18:23:08  -- vfio-user-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_VFIOUSER_QEMU=1
00:02:22.064    18:23:08  -- vfio-user-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_RUN_ASAN=1
00:02:22.064    18:23:08  -- vfio-user-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_RUN_UBSAN=1
00:02:22.064    18:23:08  -- vfio-user-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_SMA=1
00:02:22.064    18:23:08  -- vfio-user-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_NATIVE_DPDK=v23.11
00:02:22.064    18:23:08  -- vfio-user-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build
00:02:22.064    18:23:08  -- vfio-user-phy-autotest/autorun-spdk.conf@8 -- $ RUN_NIGHTLY=1
00:02:22.064   18:23:08  -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:22.064   18:23:08  -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/vfio-user-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/vfio-user-phy-autotest/autorun-spdk.conf
00:02:22.064     18:23:08  -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:22.064    18:23:08  -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/common.sh
00:02:22.064     18:23:08  -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:22.064     18:23:08  -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:22.064     18:23:08  -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:22.064     18:23:08  -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:22.064      18:23:08  -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:22.064      18:23:08  -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:22.064      18:23:08  -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:22.064      18:23:08  -- paths/export.sh@5 -- $ export PATH
00:02:22.064      18:23:08  -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:22.064    18:23:08  -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output
00:02:22.064      18:23:08  -- common/autobuild_common.sh@486 -- $ date +%s
00:02:22.064     18:23:08  -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731864188.XXXXXX
00:02:22.064    18:23:08  -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731864188.VFCKaY
00:02:22.064    18:23:08  -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:02:22.064    18:23:08  -- common/autobuild_common.sh@492 -- $ '[' -n v23.11 ']'
00:02:22.064     18:23:08  -- common/autobuild_common.sh@493 -- $ dirname /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build
00:02:22.064    18:23:08  -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/vfio-user-phy-autotest/dpdk'
00:02:22.064    18:23:08  -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/vfio-user-phy-autotest/spdk/xnvme --exclude /tmp'
00:02:22.064    18:23:08  -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/scan-build-tmp  --exclude /var/jenkins/workspace/vfio-user-phy-autotest/dpdk --exclude /var/jenkins/workspace/vfio-user-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:02:22.064     18:23:08  -- common/autobuild_common.sh@502 -- $ get_config_params
00:02:22.064     18:23:08  -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:22.064     18:23:08  -- common/autotest_common.sh@10 -- $ set +x
00:02:22.064    18:23:08  -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build --with-sma --with-crypto'
00:02:22.064    18:23:08  -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:02:22.064    18:23:08  -- pm/common@17 -- $ local monitor
00:02:22.064    18:23:08  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:22.064    18:23:08  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:22.064    18:23:08  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:22.064    18:23:08  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:22.064     18:23:08  -- pm/common@21 -- $ date +%s
00:02:22.064    18:23:08  -- pm/common@25 -- $ sleep 1
00:02:22.064     18:23:08  -- pm/common@21 -- $ date +%s
00:02:22.064     18:23:08  -- pm/common@21 -- $ date +%s
00:02:22.064     18:23:08  -- pm/common@21 -- $ date +%s
00:02:22.064    18:23:08  -- pm/common@21 -- $ /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731864188
00:02:22.064    18:23:08  -- pm/common@21 -- $ /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731864188
00:02:22.064    18:23:08  -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731864188
00:02:22.064    18:23:08  -- pm/common@21 -- $ /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731864188
00:02:22.064  Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731864188_collect-vmstat.pm.log
00:02:22.064  Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731864188_collect-cpu-load.pm.log
00:02:22.064  Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731864188_collect-cpu-temp.pm.log
00:02:22.064  Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731864188_collect-bmc-pm.bmc.pm.log
00:02:23.442    18:23:09  -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:02:23.442   18:23:09  -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:23.442   18:23:09  -- spdk/autobuild.sh@12 -- $ umask 022
00:02:23.442   18:23:09  -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:02:23.442   18:23:09  -- spdk/autobuild.sh@16 -- $ date -u
00:02:23.442  Sun Nov 17 05:23:09 PM UTC 2024
00:02:23.442   18:23:09  -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:23.442  v25.01-pre-189-g83e8405e4
00:02:23.442   18:23:09  -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:23.442   18:23:09  -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:23.442   18:23:09  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:23.442   18:23:09  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:23.442   18:23:09  -- common/autotest_common.sh@10 -- $ set +x
00:02:23.442  ************************************
00:02:23.442  START TEST asan
00:02:23.442  ************************************
00:02:23.442   18:23:09 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:02:23.442  using asan
00:02:23.442  
00:02:23.442  real	0m0.000s
00:02:23.442  user	0m0.000s
00:02:23.442  sys	0m0.000s
00:02:23.442   18:23:09 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:23.442   18:23:09 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:23.442  ************************************
00:02:23.442  END TEST asan
00:02:23.442  ************************************
00:02:23.442   18:23:09  -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:23.442   18:23:09  -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:23.442   18:23:09  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:23.442   18:23:09  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:23.443   18:23:09  -- common/autotest_common.sh@10 -- $ set +x
00:02:23.443  ************************************
00:02:23.443  START TEST ubsan
00:02:23.443  ************************************
00:02:23.443   18:23:09 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:23.443  using ubsan
00:02:23.443  
00:02:23.443  real	0m0.000s
00:02:23.443  user	0m0.000s
00:02:23.443  sys	0m0.000s
00:02:23.443   18:23:09 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:23.443   18:23:09 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:23.443  ************************************
00:02:23.443  END TEST ubsan
00:02:23.443  ************************************
00:02:23.443   18:23:09  -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']'
00:02:23.443   18:23:09  -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:02:23.443   18:23:09  -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk
00:02:23.443   18:23:09  -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']'
00:02:23.443   18:23:09  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:23.443   18:23:09  -- common/autotest_common.sh@10 -- $ set +x
00:02:23.443  ************************************
00:02:23.443  START TEST build_native_dpdk
00:02:23.443  ************************************
00:02:23.443   18:23:09 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk
00:02:23.443   18:23:09 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:02:23.443   18:23:09 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:02:23.443   18:23:09 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:02:23.443   18:23:09 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:02:23.443   18:23:09 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:02:23.443   18:23:09 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:02:23.443   18:23:09 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:02:23.443   18:23:09 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:02:23.443   18:23:09 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:02:23.443   18:23:09 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:02:23.443   18:23:09 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:02:23.443    18:23:09 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:02:23.443   18:23:09 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:02:23.443   18:23:09 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:02:23.443   18:23:09 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build
00:02:23.443    18:23:09 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build
00:02:23.443   18:23:09 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/vfio-user-phy-autotest/dpdk
00:02:23.443   18:23:09 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/vfio-user-phy-autotest/dpdk ]]
00:02:23.443   18:23:09 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:02:23.443   18:23:09 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/vfio-user-phy-autotest/dpdk log --oneline -n 5
00:02:23.443  eeb0605f11 version: 23.11.0
00:02:23.443  238778122a doc: update release notes for 23.11
00:02:23.443  46aa6b3cfc doc: fix description of RSS features
00:02:23.443  dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:02:23.443  7e421ae345 devtools: support skipping forbid rule check
00:02:23.443   18:23:09 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:02:23.443   18:23:09 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:02:23.443   18:23:09 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0
00:02:23.443   18:23:09 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:02:23.443   18:23:09 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:02:23.443   18:23:09 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:02:23.443   18:23:09 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:02:23.443   18:23:09 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:02:23.443   18:23:09 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:02:23.443   18:23:09 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
00:02:23.443   18:23:09 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
00:02:23.443   18:23:09 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:02:23.443   18:23:09 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 1 -eq 1 ]]
00:02:23.443   18:23:09 build_native_dpdk -- common/autobuild_common.sh@104 -- $ intel_ipsec_mb_ver=v0.54
00:02:23.443   18:23:09 build_native_dpdk -- common/autobuild_common.sh@105 -- $ intel_ipsec_mb_drv=crypto/aesni_mb
00:02:23.443   18:23:09 build_native_dpdk -- common/autobuild_common.sh@106 -- $ intel_ipsec_lib=
00:02:23.443   18:23:09 build_native_dpdk -- common/autobuild_common.sh@107 -- $ ge 23.11.0 21.11.0
00:02:23.443   18:23:09 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 21.11.0
00:02:23.443   18:23:09 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:23.443   18:23:09 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:23.443   18:23:09 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:23.443   18:23:09 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:23.443   18:23:09 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:23.443   18:23:09 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:23.443   18:23:09 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>='
00:02:23.443   18:23:09 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:02:23.443   18:23:09 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:23.443   18:23:09 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:23.443   18:23:09 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:23.443   18:23:09 build_native_dpdk -- scripts/common.sh@348 -- $ : 1
00:02:23.443   18:23:09 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:23.443   18:23:09 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:23.443    18:23:09 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23
00:02:23.443    18:23:09 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23
00:02:23.443    18:23:09 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:02:23.443    18:23:09 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23
00:02:23.443   18:23:09 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23
00:02:23.443    18:23:09 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21
00:02:23.443    18:23:09 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21
00:02:23.443    18:23:09 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:02:23.443    18:23:09 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21
00:02:23.443   18:23:09 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21
00:02:23.443   18:23:09 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:23.443   18:23:09 build_native_dpdk -- scripts/common.sh@367 -- $ return 0
00:02:23.443   18:23:09 build_native_dpdk -- common/autobuild_common.sh@112 -- $ intel_ipsec_mb_ver=v1.0
00:02:23.443   18:23:09 build_native_dpdk -- common/autobuild_common.sh@113 -- $ intel_ipsec_mb_drv=crypto/ipsec_mb
00:02:23.443   18:23:09 build_native_dpdk -- common/autobuild_common.sh@114 -- $ intel_ipsec_lib=lib
00:02:23.443   18:23:09 build_native_dpdk -- common/autobuild_common.sh@116 -- $ git clone --branch v1.0 --depth 1 https://github.com/intel/intel-ipsec-mb.git /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/intel-ipsec-mb
00:02:23.443  Cloning into '/var/jenkins/workspace/vfio-user-phy-autotest/dpdk/intel-ipsec-mb'...
00:02:24.381  Note: switching to 'a1a289dabb23be78d6531de481ba6a417c67b0a5'.
00:02:24.381  
00:02:24.381  You are in 'detached HEAD' state. You can look around, make experimental
00:02:24.381  changes and commit them, and you can discard any commits you make in this
00:02:24.381  state without impacting any branches by switching back to a branch.
00:02:24.381  
00:02:24.381  If you want to create a new branch to retain commits you create, you may
00:02:24.381  do so (now or later) by using -c with the switch command. Example:
00:02:24.381  
00:02:24.381    git switch -c <new-branch-name>
00:02:24.381  
00:02:24.381  Or undo this operation with:
00:02:24.381  
00:02:24.381    git switch -
00:02:24.381  
00:02:24.381  Turn off this advice by setting config variable advice.detachedHead to false
00:02:24.381  
00:02:24.381   18:23:10 build_native_dpdk -- common/autobuild_common.sh@117 -- $ cd /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/intel-ipsec-mb
00:02:24.381   18:23:10 build_native_dpdk -- common/autobuild_common.sh@118 -- $ make -j88 all SHARED=y EXTRA_CFLAGS=-fPIC
00:02:24.381  make -C lib
00:02:24.381  make[1]: Entering directory '/var/jenkins/workspace/vfio-user-phy-autotest/dpdk/intel-ipsec-mb/lib'
00:02:24.639  mkdir obj
00:02:24.640  nasm -MD obj/aes_keyexp_128.d -MT obj/aes_keyexp_128.o -o obj/aes_keyexp_128.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/aes_keyexp_128.asm
00:02:24.640  nasm -MD obj/aes_keyexp_192.d -MT obj/aes_keyexp_192.o -o obj/aes_keyexp_192.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/aes_keyexp_192.asm
00:02:24.640  nasm -MD obj/aes_keyexp_256.d -MT obj/aes_keyexp_256.o -o obj/aes_keyexp_256.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/aes_keyexp_256.asm
00:02:24.640  nasm -MD obj/aes_cmac_subkey_gen.d -MT obj/aes_cmac_subkey_gen.o -o obj/aes_cmac_subkey_gen.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/aes_cmac_subkey_gen.asm
00:02:24.640  nasm -MD obj/save_xmms.d -MT obj/save_xmms.o -o obj/save_xmms.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/save_xmms.asm
00:02:24.640  nasm -MD obj/clear_regs_mem_fns.d -MT obj/clear_regs_mem_fns.o -o obj/clear_regs_mem_fns.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/clear_regs_mem_fns.asm
00:02:24.640  nasm -MD obj/const.d -MT obj/const.o -o obj/const.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/const.asm
00:02:24.640  nasm -MD obj/aes128_ecbenc_x3.d -MT obj/aes128_ecbenc_x3.o -o obj/aes128_ecbenc_x3.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/aes128_ecbenc_x3.asm
00:02:24.640  nasm -MD obj/zuc_common.d -MT obj/zuc_common.o -o obj/zuc_common.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/zuc_common.asm
00:02:24.640  nasm -MD obj/wireless_common.d -MT obj/wireless_common.o -o obj/wireless_common.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/wireless_common.asm
00:02:24.640  nasm -MD obj/constant_lookup.d -MT obj/constant_lookup.o -o obj/constant_lookup.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/constant_lookup.asm
00:02:24.640  nasm -MD obj/crc32_refl_const.d -MT obj/crc32_refl_const.o -o obj/crc32_refl_const.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/crc32_refl_const.asm
00:02:24.640  nasm -MD obj/crc32_const.d -MT obj/crc32_const.o -o obj/crc32_const.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/crc32_const.asm
00:02:24.640  ld -r -z ibt -z shstk -o obj/save_xmms.o.tmp obj/save_xmms.o
00:02:24.640  ld -r -z ibt -z shstk -o obj/clear_regs_mem_fns.o.tmp obj/clear_regs_mem_fns.o
00:02:24.640  ld -r -z ibt -z shstk -o obj/const.o.tmp obj/const.o
00:02:24.640  nasm -MD obj/poly1305.d -MT obj/poly1305.o -o obj/poly1305.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/poly1305.asm
00:02:24.640  ld -r -z ibt -z shstk -o obj/wireless_common.o.tmp obj/wireless_common.o
00:02:24.640  gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/chacha20_poly1305.c -o obj/chacha20_poly1305.o
00:02:24.640  ld -r -z ibt -z shstk -o obj/crc32_refl_const.o.tmp obj/crc32_refl_const.o
00:02:24.640  ld -r -z ibt -z shstk -o obj/crc32_const.o.tmp obj/crc32_const.o
00:02:24.640  mv obj/save_xmms.o.tmp obj/save_xmms.o
00:02:24.640  mv obj/clear_regs_mem_fns.o.tmp obj/clear_regs_mem_fns.o
00:02:24.640  mv obj/const.o.tmp obj/const.o
00:02:24.640  mv obj/wireless_common.o.tmp obj/wireless_common.o
00:02:24.640  nasm -MD obj/aes128_cbc_dec_by4_sse_no_aesni.d -MT obj/aes128_cbc_dec_by4_sse_no_aesni.o -o obj/aes128_cbc_dec_by4_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes128_cbc_dec_by4_sse_no_aesni.asm
00:02:24.640  mv obj/crc32_const.o.tmp obj/crc32_const.o
00:02:24.640  nasm -MD obj/aes192_cbc_dec_by4_sse_no_aesni.d -MT obj/aes192_cbc_dec_by4_sse_no_aesni.o -o obj/aes192_cbc_dec_by4_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes192_cbc_dec_by4_sse_no_aesni.asm
00:02:24.640  mv obj/crc32_refl_const.o.tmp obj/crc32_refl_const.o
00:02:24.640  nasm -MD obj/aes256_cbc_dec_by4_sse_no_aesni.d -MT obj/aes256_cbc_dec_by4_sse_no_aesni.o -o obj/aes256_cbc_dec_by4_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes256_cbc_dec_by4_sse_no_aesni.asm
00:02:24.640  nasm -MD obj/aes_cbc_enc_128_x4_no_aesni.d -MT obj/aes_cbc_enc_128_x4_no_aesni.o -o obj/aes_cbc_enc_128_x4_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes_cbc_enc_128_x4_no_aesni.asm
00:02:24.640  nasm -MD obj/aes_cbc_enc_192_x4_no_aesni.d -MT obj/aes_cbc_enc_192_x4_no_aesni.o -o obj/aes_cbc_enc_192_x4_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes_cbc_enc_192_x4_no_aesni.asm
00:02:24.640  ld -r -z ibt -z shstk -o obj/constant_lookup.o.tmp obj/constant_lookup.o
00:02:24.640  nasm -MD obj/aes_cbc_enc_256_x4_no_aesni.d -MT obj/aes_cbc_enc_256_x4_no_aesni.o -o obj/aes_cbc_enc_256_x4_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes_cbc_enc_256_x4_no_aesni.asm
00:02:24.640  nasm -MD obj/aes128_cntr_by8_sse_no_aesni.d -MT obj/aes128_cntr_by8_sse_no_aesni.o -o obj/aes128_cntr_by8_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes128_cntr_by8_sse_no_aesni.asm
00:02:24.640  nasm -MD obj/aes192_cntr_by8_sse_no_aesni.d -MT obj/aes192_cntr_by8_sse_no_aesni.o -o obj/aes192_cntr_by8_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes192_cntr_by8_sse_no_aesni.asm
00:02:24.640  mv obj/constant_lookup.o.tmp obj/constant_lookup.o
00:02:24.640  nasm -MD obj/aes256_cntr_by8_sse_no_aesni.d -MT obj/aes256_cntr_by8_sse_no_aesni.o -o obj/aes256_cntr_by8_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes256_cntr_by8_sse_no_aesni.asm
00:02:24.640  nasm -MD obj/aes_ecb_by4_sse_no_aesni.d -MT obj/aes_ecb_by4_sse_no_aesni.o -o obj/aes_ecb_by4_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes_ecb_by4_sse_no_aesni.asm
00:02:24.640  nasm -MD obj/aes128_cntr_ccm_by8_sse_no_aesni.d -MT obj/aes128_cntr_ccm_by8_sse_no_aesni.o -o obj/aes128_cntr_ccm_by8_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes128_cntr_ccm_by8_sse_no_aesni.asm
00:02:24.902  nasm -MD obj/aes256_cntr_ccm_by8_sse_no_aesni.d -MT obj/aes256_cntr_ccm_by8_sse_no_aesni.o -o obj/aes256_cntr_ccm_by8_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes256_cntr_ccm_by8_sse_no_aesni.asm
00:02:24.902  nasm -MD obj/pon_sse_no_aesni.d -MT obj/pon_sse_no_aesni.o -o obj/pon_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/pon_sse_no_aesni.asm
00:02:24.902  nasm -MD obj/zuc_sse_no_aesni.d -MT obj/zuc_sse_no_aesni.o -o obj/zuc_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/zuc_sse_no_aesni.asm
00:02:24.902  nasm -MD obj/aes_cfb_sse_no_aesni.d -MT obj/aes_cfb_sse_no_aesni.o -o obj/aes_cfb_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes_cfb_sse_no_aesni.asm
00:02:24.902  nasm -MD obj/aes128_cbc_mac_x4_no_aesni.d -MT obj/aes128_cbc_mac_x4_no_aesni.o -o obj/aes128_cbc_mac_x4_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes128_cbc_mac_x4_no_aesni.asm
00:02:24.902  nasm -MD obj/aes256_cbc_mac_x4_no_aesni.d -MT obj/aes256_cbc_mac_x4_no_aesni.o -o obj/aes256_cbc_mac_x4_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes256_cbc_mac_x4_no_aesni.asm
00:02:24.902  nasm -MD obj/aes_xcbc_mac_128_x4_no_aesni.d -MT obj/aes_xcbc_mac_128_x4_no_aesni.o -o obj/aes_xcbc_mac_128_x4_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes_xcbc_mac_128_x4_no_aesni.asm
00:02:24.902  nasm -MD obj/mb_mgr_aes_flush_sse_no_aesni.d -MT obj/mb_mgr_aes_flush_sse_no_aesni.o -o obj/mb_mgr_aes_flush_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_aes_flush_sse_no_aesni.asm
00:02:24.902  nasm -MD obj/mb_mgr_aes_submit_sse_no_aesni.d -MT obj/mb_mgr_aes_submit_sse_no_aesni.o -o obj/mb_mgr_aes_submit_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_aes_submit_sse_no_aesni.asm
00:02:24.902  ld -r -z ibt -z shstk -o obj/poly1305.o.tmp obj/poly1305.o
00:02:24.902  nasm -MD obj/mb_mgr_aes192_flush_sse_no_aesni.d -MT obj/mb_mgr_aes192_flush_sse_no_aesni.o -o obj/mb_mgr_aes192_flush_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_aes192_flush_sse_no_aesni.asm
00:02:24.902  nasm -MD obj/mb_mgr_aes192_submit_sse_no_aesni.d -MT obj/mb_mgr_aes192_submit_sse_no_aesni.o -o obj/mb_mgr_aes192_submit_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_aes192_submit_sse_no_aesni.asm
00:02:24.902  nasm -MD obj/mb_mgr_aes256_flush_sse_no_aesni.d -MT obj/mb_mgr_aes256_flush_sse_no_aesni.o -o obj/mb_mgr_aes256_flush_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_aes256_flush_sse_no_aesni.asm
00:02:24.902  mv obj/poly1305.o.tmp obj/poly1305.o
00:02:24.902  nasm -MD obj/mb_mgr_aes256_submit_sse_no_aesni.d -MT obj/mb_mgr_aes256_submit_sse_no_aesni.o -o obj/mb_mgr_aes256_submit_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_aes256_submit_sse_no_aesni.asm
00:02:24.902  nasm -MD obj/mb_mgr_aes_cmac_submit_flush_sse_no_aesni.d -MT obj/mb_mgr_aes_cmac_submit_flush_sse_no_aesni.o -o obj/mb_mgr_aes_cmac_submit_flush_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_aes_cmac_submit_flush_sse_no_aesni.asm
00:02:24.902  nasm -MD obj/mb_mgr_aes256_cmac_submit_flush_sse_no_aesni.d -MT obj/mb_mgr_aes256_cmac_submit_flush_sse_no_aesni.o -o obj/mb_mgr_aes256_cmac_submit_flush_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_aes256_cmac_submit_flush_sse_no_aesni.asm
00:02:24.902  nasm -MD obj/mb_mgr_aes_ccm_auth_submit_flush_sse_no_aesni.d -MT obj/mb_mgr_aes_ccm_auth_submit_flush_sse_no_aesni.o -o obj/mb_mgr_aes_ccm_auth_submit_flush_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_aes_ccm_auth_submit_flush_sse_no_aesni.asm
00:02:24.902  nasm -MD obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_no_aesni.d -MT obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_no_aesni.o -o obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_aes256_ccm_auth_submit_flush_sse_no_aesni.asm
00:02:24.902  nasm -MD obj/mb_mgr_aes_xcbc_flush_sse_no_aesni.d -MT obj/mb_mgr_aes_xcbc_flush_sse_no_aesni.o -o obj/mb_mgr_aes_xcbc_flush_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_aes_xcbc_flush_sse_no_aesni.asm
00:02:24.902  nasm -MD obj/mb_mgr_aes_xcbc_submit_sse_no_aesni.d -MT obj/mb_mgr_aes_xcbc_submit_sse_no_aesni.o -o obj/mb_mgr_aes_xcbc_submit_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_aes_xcbc_submit_sse_no_aesni.asm
00:02:24.902  nasm -MD obj/mb_mgr_zuc_submit_flush_sse_no_aesni.d -MT obj/mb_mgr_zuc_submit_flush_sse_no_aesni.o -o obj/mb_mgr_zuc_submit_flush_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_zuc_submit_flush_sse_no_aesni.asm
00:02:24.902  nasm -MD obj/ethernet_fcs_sse_no_aesni.d -MT obj/ethernet_fcs_sse_no_aesni.o -o obj/ethernet_fcs_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/ethernet_fcs_sse_no_aesni.asm
00:02:24.902  nasm -MD obj/crc16_x25_sse_no_aesni.d -MT obj/crc16_x25_sse_no_aesni.o -o obj/crc16_x25_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/crc16_x25_sse_no_aesni.asm
00:02:24.902  nasm -MD obj/aes_cbcs_1_9_enc_128_x4_no_aesni.d -MT obj/aes_cbcs_1_9_enc_128_x4_no_aesni.o -o obj/aes_cbcs_1_9_enc_128_x4_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes_cbcs_1_9_enc_128_x4_no_aesni.asm
00:02:24.902  nasm -MD obj/aes128_cbcs_1_9_dec_by4_sse_no_aesni.d -MT obj/aes128_cbcs_1_9_dec_by4_sse_no_aesni.o -o obj/aes128_cbcs_1_9_dec_by4_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes128_cbcs_1_9_dec_by4_sse_no_aesni.asm
00:02:24.902  nasm -MD obj/mb_mgr_aes128_cbcs_1_9_submit_sse.d -MT obj/mb_mgr_aes128_cbcs_1_9_submit_sse.o -o obj/mb_mgr_aes128_cbcs_1_9_submit_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes128_cbcs_1_9_submit_sse.asm
00:02:24.902  nasm -MD obj/mb_mgr_aes128_cbcs_1_9_flush_sse.d -MT obj/mb_mgr_aes128_cbcs_1_9_flush_sse.o -o obj/mb_mgr_aes128_cbcs_1_9_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes128_cbcs_1_9_flush_sse.asm
00:02:24.902  ld -r -z ibt -z shstk -o obj/ethernet_fcs_sse_no_aesni.o.tmp obj/ethernet_fcs_sse_no_aesni.o
00:02:24.902  ld -r -z ibt -z shstk -o obj/crc16_x25_sse_no_aesni.o.tmp obj/crc16_x25_sse_no_aesni.o
00:02:24.902  nasm -MD obj/mb_mgr_aes128_cbcs_1_9_submit_sse_no_aesni.d -MT obj/mb_mgr_aes128_cbcs_1_9_submit_sse_no_aesni.o -o obj/mb_mgr_aes128_cbcs_1_9_submit_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_aes128_cbcs_1_9_submit_sse_no_aesni.asm
00:02:24.902  mv obj/ethernet_fcs_sse_no_aesni.o.tmp obj/ethernet_fcs_sse_no_aesni.o
00:02:24.902  nasm -MD obj/mb_mgr_aes128_cbcs_1_9_flush_sse_no_aesni.d -MT obj/mb_mgr_aes128_cbcs_1_9_flush_sse_no_aesni.o -o obj/mb_mgr_aes128_cbcs_1_9_flush_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_aes128_cbcs_1_9_flush_sse_no_aesni.asm
00:02:24.902  mv obj/crc16_x25_sse_no_aesni.o.tmp obj/crc16_x25_sse_no_aesni.o
00:02:24.902  nasm -MD obj/crc32_refl_by8_sse_no_aesni.d -MT obj/crc32_refl_by8_sse_no_aesni.o -o obj/crc32_refl_by8_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/crc32_refl_by8_sse_no_aesni.asm
00:02:24.902  nasm -MD obj/crc32_by8_sse_no_aesni.d -MT obj/crc32_by8_sse_no_aesni.o -o obj/crc32_by8_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/crc32_by8_sse_no_aesni.asm
00:02:24.902  ld -r -z ibt -z shstk -o obj/mb_mgr_aes_submit_sse_no_aesni.o.tmp obj/mb_mgr_aes_submit_sse_no_aesni.o
00:02:24.902  nasm -MD obj/crc32_sctp_sse_no_aesni.d -MT obj/crc32_sctp_sse_no_aesni.o -o obj/crc32_sctp_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/crc32_sctp_sse_no_aesni.asm
00:02:24.902  ld -r -z ibt -z shstk -o obj/mb_mgr_aes_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes_flush_sse_no_aesni.o
00:02:24.902  nasm -MD obj/crc32_lte_sse_no_aesni.d -MT obj/crc32_lte_sse_no_aesni.o -o obj/crc32_lte_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/crc32_lte_sse_no_aesni.asm
00:02:24.902  ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_submit_sse_no_aesni.o.tmp obj/mb_mgr_aes256_submit_sse_no_aesni.o
00:02:24.902  mv obj/mb_mgr_aes_submit_sse_no_aesni.o.tmp obj/mb_mgr_aes_submit_sse_no_aesni.o
00:02:24.902  mv obj/mb_mgr_aes_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes_flush_sse_no_aesni.o
00:02:24.902  nasm -MD obj/crc32_fp_sse_no_aesni.d -MT obj/crc32_fp_sse_no_aesni.o -o obj/crc32_fp_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/crc32_fp_sse_no_aesni.asm
00:02:24.902  ld -r -z ibt -z shstk -o obj/mb_mgr_aes192_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes192_flush_sse_no_aesni.o
00:02:24.902  nasm -MD obj/crc32_iuup_sse_no_aesni.d -MT obj/crc32_iuup_sse_no_aesni.o -o obj/crc32_iuup_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/crc32_iuup_sse_no_aesni.asm
00:02:24.902  ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes256_flush_sse_no_aesni.o
00:02:24.902  ld -r -z ibt -z shstk -o obj/crc32_sctp_sse_no_aesni.o.tmp obj/crc32_sctp_sse_no_aesni.o
00:02:24.902  nasm -MD obj/crc32_wimax_sse_no_aesni.d -MT obj/crc32_wimax_sse_no_aesni.o -o obj/crc32_wimax_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/crc32_wimax_sse_no_aesni.asm
00:02:24.902  ld -r -z ibt -z shstk -o obj/crc32_lte_sse_no_aesni.o.tmp obj/crc32_lte_sse_no_aesni.o
00:02:24.902  mv obj/mb_mgr_aes256_submit_sse_no_aesni.o.tmp obj/mb_mgr_aes256_submit_sse_no_aesni.o
00:02:24.902  ld -r -z ibt -z shstk -o obj/mb_mgr_aes192_submit_sse_no_aesni.o.tmp obj/mb_mgr_aes192_submit_sse_no_aesni.o
00:02:24.902  mv obj/mb_mgr_aes192_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes192_flush_sse_no_aesni.o
00:02:24.902  mv obj/mb_mgr_aes256_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes256_flush_sse_no_aesni.o
00:02:24.902  nasm -MD obj/gcm128_sse_no_aesni.d -MT obj/gcm128_sse_no_aesni.o -o obj/gcm128_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/gcm128_sse_no_aesni.asm
00:02:24.902  mv obj/crc32_sctp_sse_no_aesni.o.tmp obj/crc32_sctp_sse_no_aesni.o
00:02:24.902  mv obj/crc32_lte_sse_no_aesni.o.tmp obj/crc32_lte_sse_no_aesni.o
00:02:24.903  mv obj/mb_mgr_aes192_submit_sse_no_aesni.o.tmp obj/mb_mgr_aes192_submit_sse_no_aesni.o
00:02:24.903  nasm -MD obj/gcm192_sse_no_aesni.d -MT obj/gcm192_sse_no_aesni.o -o obj/gcm192_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/gcm192_sse_no_aesni.asm
00:02:24.903  ld -r -z ibt -z shstk -o obj/crc32_iuup_sse_no_aesni.o.tmp obj/crc32_iuup_sse_no_aesni.o
00:02:24.903  ld -r -z ibt -z shstk -o obj/mb_mgr_aes128_cbcs_1_9_flush_sse.o.tmp obj/mb_mgr_aes128_cbcs_1_9_flush_sse.o
00:02:24.903  nasm -MD obj/gcm256_sse_no_aesni.d -MT obj/gcm256_sse_no_aesni.o -o obj/gcm256_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/gcm256_sse_no_aesni.asm
00:02:24.903  ld -r -z ibt -z shstk -o obj/mb_mgr_aes_xcbc_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes_xcbc_flush_sse_no_aesni.o
00:02:24.903  nasm -MD obj/aes128_cbc_dec_by4_sse.d -MT obj/aes128_cbc_dec_by4_sse.o -o obj/aes128_cbc_dec_by4_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes128_cbc_dec_by4_sse.asm
00:02:24.903  mv obj/crc32_iuup_sse_no_aesni.o.tmp obj/crc32_iuup_sse_no_aesni.o
00:02:24.903  ld -r -z ibt -z shstk -o obj/crc32_fp_sse_no_aesni.o.tmp obj/crc32_fp_sse_no_aesni.o
00:02:24.903  ld -r -z ibt -z shstk -o obj/crc32_wimax_sse_no_aesni.o.tmp obj/crc32_wimax_sse_no_aesni.o
00:02:24.903  ld -r -z ibt -z shstk -o obj/mb_mgr_aes128_cbcs_1_9_submit_sse_no_aesni.o.tmp obj/mb_mgr_aes128_cbcs_1_9_submit_sse_no_aesni.o
00:02:24.903  mv obj/mb_mgr_aes128_cbcs_1_9_flush_sse.o.tmp obj/mb_mgr_aes128_cbcs_1_9_flush_sse.o
00:02:24.903  nasm -MD obj/aes128_cbc_dec_by8_sse.d -MT obj/aes128_cbc_dec_by8_sse.o -o obj/aes128_cbc_dec_by8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes128_cbc_dec_by8_sse.asm
00:02:24.903  ld -r -z ibt -z shstk -o obj/mb_mgr_aes_xcbc_submit_sse_no_aesni.o.tmp obj/mb_mgr_aes_xcbc_submit_sse_no_aesni.o
00:02:24.903  mv obj/mb_mgr_aes_xcbc_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes_xcbc_flush_sse_no_aesni.o
00:02:24.903  mv obj/crc32_fp_sse_no_aesni.o.tmp obj/crc32_fp_sse_no_aesni.o
00:02:24.903  mv obj/crc32_wimax_sse_no_aesni.o.tmp obj/crc32_wimax_sse_no_aesni.o
00:02:24.903  mv obj/mb_mgr_aes128_cbcs_1_9_submit_sse_no_aesni.o.tmp obj/mb_mgr_aes128_cbcs_1_9_submit_sse_no_aesni.o
00:02:24.903  nasm -MD obj/aes192_cbc_dec_by4_sse.d -MT obj/aes192_cbc_dec_by4_sse.o -o obj/aes192_cbc_dec_by4_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes192_cbc_dec_by4_sse.asm
00:02:24.903  mv obj/mb_mgr_aes_xcbc_submit_sse_no_aesni.o.tmp obj/mb_mgr_aes_xcbc_submit_sse_no_aesni.o
00:02:24.903  nasm -MD obj/aes192_cbc_dec_by8_sse.d -MT obj/aes192_cbc_dec_by8_sse.o -o obj/aes192_cbc_dec_by8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes192_cbc_dec_by8_sse.asm
00:02:24.903  ld -r -z ibt -z shstk -o obj/aes128_cbc_dec_by4_sse.o.tmp obj/aes128_cbc_dec_by4_sse.o
00:02:24.903  nasm -MD obj/aes256_cbc_dec_by4_sse.d -MT obj/aes256_cbc_dec_by4_sse.o -o obj/aes256_cbc_dec_by4_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes256_cbc_dec_by4_sse.asm
00:02:24.903  nasm -MD obj/aes256_cbc_dec_by8_sse.d -MT obj/aes256_cbc_dec_by8_sse.o -o obj/aes256_cbc_dec_by8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes256_cbc_dec_by8_sse.asm
00:02:24.903  ld -r -z ibt -z shstk -o obj/mb_mgr_aes128_cbcs_1_9_submit_sse.o.tmp obj/mb_mgr_aes128_cbcs_1_9_submit_sse.o
00:02:24.903  mv obj/aes128_cbc_dec_by4_sse.o.tmp obj/aes128_cbc_dec_by4_sse.o
00:02:24.903  nasm -MD obj/aes_cbc_enc_128_x4.d -MT obj/aes_cbc_enc_128_x4.o -o obj/aes_cbc_enc_128_x4.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes_cbc_enc_128_x4.asm
00:02:24.903  mv obj/mb_mgr_aes128_cbcs_1_9_submit_sse.o.tmp obj/mb_mgr_aes128_cbcs_1_9_submit_sse.o
00:02:24.903  nasm -MD obj/aes_cbc_enc_192_x4.d -MT obj/aes_cbc_enc_192_x4.o -o obj/aes_cbc_enc_192_x4.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes_cbc_enc_192_x4.asm
00:02:24.903  ld -r -z ibt -z shstk -o obj/aes_keyexp_128.o.tmp obj/aes_keyexp_128.o
00:02:24.903  ld -r -z ibt -z shstk -o obj/aes_keyexp_192.o.tmp obj/aes_keyexp_192.o
00:02:24.903  ld -r -z ibt -z shstk -o obj/aes_cmac_subkey_gen.o.tmp obj/aes_cmac_subkey_gen.o
00:02:24.903  ld -r -z ibt -z shstk -o obj/mb_mgr_aes_cmac_submit_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes_cmac_submit_flush_sse_no_aesni.o
00:02:24.903  ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_cmac_submit_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes256_cmac_submit_flush_sse_no_aesni.o
00:02:24.903  ld -r -z ibt -z shstk -o obj/mb_mgr_aes128_cbcs_1_9_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes128_cbcs_1_9_flush_sse_no_aesni.o
00:02:24.903  ld -r -z ibt -z shstk -o obj/aes128_cbc_dec_by8_sse.o.tmp obj/aes128_cbc_dec_by8_sse.o
00:02:24.903  ld -r -z ibt -z shstk -o obj/aes192_cbc_dec_by4_sse.o.tmp obj/aes192_cbc_dec_by4_sse.o
00:02:24.903  ld -r -z ibt -z shstk -o obj/aes192_cbc_dec_by8_sse.o.tmp obj/aes192_cbc_dec_by8_sse.o
00:02:24.903  ld -r -z ibt -z shstk -o obj/aes256_cbc_dec_by4_sse.o.tmp obj/aes256_cbc_dec_by4_sse.o
00:02:24.903  mv obj/aes_keyexp_128.o.tmp obj/aes_keyexp_128.o
00:02:24.903  ld -r -z ibt -z shstk -o obj/aes256_cbc_dec_by8_sse.o.tmp obj/aes256_cbc_dec_by8_sse.o
00:02:24.903  mv obj/aes_keyexp_192.o.tmp obj/aes_keyexp_192.o
00:02:24.903  mv obj/aes_cmac_subkey_gen.o.tmp obj/aes_cmac_subkey_gen.o
00:02:24.903  mv obj/mb_mgr_aes_cmac_submit_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes_cmac_submit_flush_sse_no_aesni.o
00:02:24.903  mv obj/mb_mgr_aes256_cmac_submit_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes256_cmac_submit_flush_sse_no_aesni.o
00:02:24.903  mv obj/mb_mgr_aes128_cbcs_1_9_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes128_cbcs_1_9_flush_sse_no_aesni.o
00:02:24.903  mv obj/aes128_cbc_dec_by8_sse.o.tmp obj/aes128_cbc_dec_by8_sse.o
00:02:24.903  mv obj/aes192_cbc_dec_by4_sse.o.tmp obj/aes192_cbc_dec_by4_sse.o
00:02:24.903  mv obj/aes192_cbc_dec_by8_sse.o.tmp obj/aes192_cbc_dec_by8_sse.o
00:02:24.903  mv obj/aes256_cbc_dec_by4_sse.o.tmp obj/aes256_cbc_dec_by4_sse.o
00:02:24.903  nasm -MD obj/aes_cbc_enc_256_x4.d -MT obj/aes_cbc_enc_256_x4.o -o obj/aes_cbc_enc_256_x4.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes_cbc_enc_256_x4.asm
00:02:24.903  ld -r -z ibt -z shstk -o obj/aes128_ecbenc_x3.o.tmp obj/aes128_ecbenc_x3.o
00:02:24.903  mv obj/aes256_cbc_dec_by8_sse.o.tmp obj/aes256_cbc_dec_by8_sse.o
00:02:24.903  nasm -MD obj/aes_cbc_enc_128_x8_sse.d -MT obj/aes_cbc_enc_128_x8_sse.o -o obj/aes_cbc_enc_128_x8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes_cbc_enc_128_x8_sse.asm
00:02:24.903  ld -r -z ibt -z shstk -o obj/aes_cbc_enc_192_x4.o.tmp obj/aes_cbc_enc_192_x4.o
00:02:24.903  nasm -MD obj/aes_cbc_enc_192_x8_sse.d -MT obj/aes_cbc_enc_192_x8_sse.o -o obj/aes_cbc_enc_192_x8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes_cbc_enc_192_x8_sse.asm
00:02:24.903  nasm -MD obj/aes_cbc_enc_256_x8_sse.d -MT obj/aes_cbc_enc_256_x8_sse.o -o obj/aes_cbc_enc_256_x8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes_cbc_enc_256_x8_sse.asm
00:02:24.903  mv obj/aes128_ecbenc_x3.o.tmp obj/aes128_ecbenc_x3.o
00:02:24.903  nasm -MD obj/pon_sse.d -MT obj/pon_sse.o -o obj/pon_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/pon_sse.asm
00:02:24.903  mv obj/aes_cbc_enc_192_x4.o.tmp obj/aes_cbc_enc_192_x4.o
00:02:24.903  nasm -MD obj/aes128_cntr_by8_sse.d -MT obj/aes128_cntr_by8_sse.o -o obj/aes128_cntr_by8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes128_cntr_by8_sse.asm
00:02:24.903  nasm -MD obj/aes192_cntr_by8_sse.d -MT obj/aes192_cntr_by8_sse.o -o obj/aes192_cntr_by8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes192_cntr_by8_sse.asm
00:02:24.903  nasm -MD obj/aes256_cntr_by8_sse.d -MT obj/aes256_cntr_by8_sse.o -o obj/aes256_cntr_by8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes256_cntr_by8_sse.asm
00:02:24.903  nasm -MD obj/aes_ecb_by4_sse.d -MT obj/aes_ecb_by4_sse.o -o obj/aes_ecb_by4_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes_ecb_by4_sse.asm
00:02:24.903  nasm -MD obj/aes128_cntr_ccm_by8_sse.d -MT obj/aes128_cntr_ccm_by8_sse.o -o obj/aes128_cntr_ccm_by8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes128_cntr_ccm_by8_sse.asm
00:02:24.903  nasm -MD obj/aes256_cntr_ccm_by8_sse.d -MT obj/aes256_cntr_ccm_by8_sse.o -o obj/aes256_cntr_ccm_by8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes256_cntr_ccm_by8_sse.asm
00:02:24.903  nasm -MD obj/aes_cfb_sse.d -MT obj/aes_cfb_sse.o -o obj/aes_cfb_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes_cfb_sse.asm
00:02:24.903  nasm -MD obj/aes128_cbc_mac_x4.d -MT obj/aes128_cbc_mac_x4.o -o obj/aes128_cbc_mac_x4.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes128_cbc_mac_x4.asm
00:02:24.903  nasm -MD obj/aes256_cbc_mac_x4.d -MT obj/aes256_cbc_mac_x4.o -o obj/aes256_cbc_mac_x4.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes256_cbc_mac_x4.asm
00:02:24.903  nasm -MD obj/aes128_cbc_mac_x8_sse.d -MT obj/aes128_cbc_mac_x8_sse.o -o obj/aes128_cbc_mac_x8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes128_cbc_mac_x8_sse.asm
00:02:24.903  nasm -MD obj/aes256_cbc_mac_x8_sse.d -MT obj/aes256_cbc_mac_x8_sse.o -o obj/aes256_cbc_mac_x8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes256_cbc_mac_x8_sse.asm
00:02:24.903  nasm -MD obj/aes_xcbc_mac_128_x4.d -MT obj/aes_xcbc_mac_128_x4.o -o obj/aes_xcbc_mac_128_x4.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes_xcbc_mac_128_x4.asm
00:02:24.903  nasm -MD obj/md5_x4x2_sse.d -MT obj/md5_x4x2_sse.o -o obj/md5_x4x2_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/md5_x4x2_sse.asm
00:02:24.903  nasm -MD obj/sha1_mult_sse.d -MT obj/sha1_mult_sse.o -o obj/sha1_mult_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/sha1_mult_sse.asm
00:02:24.903  nasm -MD obj/sha1_one_block_sse.d -MT obj/sha1_one_block_sse.o -o obj/sha1_one_block_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/sha1_one_block_sse.asm
00:02:24.903  ld -r -z ibt -z shstk -o obj/aes_cbc_enc_128_x4.o.tmp obj/aes_cbc_enc_128_x4.o
00:02:24.903  nasm -MD obj/sha224_one_block_sse.d -MT obj/sha224_one_block_sse.o -o obj/sha224_one_block_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/sha224_one_block_sse.asm
00:02:24.903  nasm -MD obj/sha256_one_block_sse.d -MT obj/sha256_one_block_sse.o -o obj/sha256_one_block_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/sha256_one_block_sse.asm
00:02:24.903  nasm -MD obj/sha384_one_block_sse.d -MT obj/sha384_one_block_sse.o -o obj/sha384_one_block_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/sha384_one_block_sse.asm
00:02:24.903  nasm -MD obj/sha512_one_block_sse.d -MT obj/sha512_one_block_sse.o -o obj/sha512_one_block_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/sha512_one_block_sse.asm
00:02:24.903  mv obj/aes_cbc_enc_128_x4.o.tmp obj/aes_cbc_enc_128_x4.o
00:02:24.903  nasm -MD obj/sha512_x2_sse.d -MT obj/sha512_x2_sse.o -o obj/sha512_x2_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/sha512_x2_sse.asm
00:02:24.903  ld -r -z ibt -z shstk -o obj/aes_cbc_enc_256_x4.o.tmp obj/aes_cbc_enc_256_x4.o
00:02:24.903  nasm -MD obj/sha_256_mult_sse.d -MT obj/sha_256_mult_sse.o -o obj/sha_256_mult_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/sha_256_mult_sse.asm
00:02:24.903  ld -r -z ibt -z shstk -o obj/mb_mgr_zuc_submit_flush_sse_no_aesni.o.tmp obj/mb_mgr_zuc_submit_flush_sse_no_aesni.o
00:02:24.903  nasm -MD obj/sha1_ni_x2_sse.d -MT obj/sha1_ni_x2_sse.o -o obj/sha1_ni_x2_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/sha1_ni_x2_sse.asm
00:02:24.903  nasm -MD obj/sha256_ni_x2_sse.d -MT obj/sha256_ni_x2_sse.o -o obj/sha256_ni_x2_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/sha256_ni_x2_sse.asm
00:02:24.903  ld -r -z ibt -z shstk -o obj/aes_cbc_enc_128_x8_sse.o.tmp obj/aes_cbc_enc_128_x8_sse.o
00:02:24.903  nasm -MD obj/zuc_sse.d -MT obj/zuc_sse.o -o obj/zuc_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/zuc_sse.asm
00:02:24.903  ld -r -z ibt -z shstk -o obj/aes_cbc_enc_256_x8_sse.o.tmp obj/aes_cbc_enc_256_x8_sse.o
00:02:24.903  mv obj/aes_cbc_enc_256_x4.o.tmp obj/aes_cbc_enc_256_x4.o
00:02:24.903  mv obj/mb_mgr_zuc_submit_flush_sse_no_aesni.o.tmp obj/mb_mgr_zuc_submit_flush_sse_no_aesni.o
00:02:24.903  nasm -MD obj/zuc_sse_gfni.d -MT obj/zuc_sse_gfni.o -o obj/zuc_sse_gfni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/zuc_sse_gfni.asm
00:02:24.903  nasm -MD obj/mb_mgr_aes_flush_sse.d -MT obj/mb_mgr_aes_flush_sse.o -o obj/mb_mgr_aes_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes_flush_sse.asm
00:02:24.903  mv obj/aes_cbc_enc_128_x8_sse.o.tmp obj/aes_cbc_enc_128_x8_sse.o
00:02:24.903  nasm -MD obj/mb_mgr_aes_submit_sse.d -MT obj/mb_mgr_aes_submit_sse.o -o obj/mb_mgr_aes_submit_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes_submit_sse.asm
00:02:24.903  ld -r -z ibt -z shstk -o obj/aes_cbc_enc_192_x8_sse.o.tmp obj/aes_cbc_enc_192_x8_sse.o
00:02:24.903  mv obj/aes_cbc_enc_256_x8_sse.o.tmp obj/aes_cbc_enc_256_x8_sse.o
00:02:24.903  ld -r -z ibt -z shstk -o obj/aes128_cbc_mac_x4.o.tmp obj/aes128_cbc_mac_x4.o
00:02:24.903  nasm -MD obj/mb_mgr_aes192_flush_sse.d -MT obj/mb_mgr_aes192_flush_sse.o -o obj/mb_mgr_aes192_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes192_flush_sse.asm
00:02:24.903  nasm -MD obj/mb_mgr_aes192_submit_sse.d -MT obj/mb_mgr_aes192_submit_sse.o -o obj/mb_mgr_aes192_submit_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes192_submit_sse.asm
00:02:24.903  nasm -MD obj/mb_mgr_aes256_flush_sse.d -MT obj/mb_mgr_aes256_flush_sse.o -o obj/mb_mgr_aes256_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes256_flush_sse.asm
00:02:24.903  mv obj/aes_cbc_enc_192_x8_sse.o.tmp obj/aes_cbc_enc_192_x8_sse.o
00:02:24.903  mv obj/aes128_cbc_mac_x4.o.tmp obj/aes128_cbc_mac_x4.o
00:02:24.903  nasm -MD obj/mb_mgr_aes256_submit_sse.d -MT obj/mb_mgr_aes256_submit_sse.o -o obj/mb_mgr_aes256_submit_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes256_submit_sse.asm
00:02:24.903  ld -r -z ibt -z shstk -o obj/aes_cfb_sse.o.tmp obj/aes_cfb_sse.o
00:02:24.903  nasm -MD obj/mb_mgr_aes_flush_sse_x8.d -MT obj/mb_mgr_aes_flush_sse_x8.o -o obj/mb_mgr_aes_flush_sse_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes_flush_sse_x8.asm
00:02:24.903  nasm -MD obj/mb_mgr_aes_submit_sse_x8.d -MT obj/mb_mgr_aes_submit_sse_x8.o -o obj/mb_mgr_aes_submit_sse_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes_submit_sse_x8.asm
00:02:24.903  ld -r -z ibt -z shstk -o obj/sha224_one_block_sse.o.tmp obj/sha224_one_block_sse.o
00:02:24.903  nasm -MD obj/mb_mgr_aes192_flush_sse_x8.d -MT obj/mb_mgr_aes192_flush_sse_x8.o -o obj/mb_mgr_aes192_flush_sse_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes192_flush_sse_x8.asm
00:02:24.903  ld -r -z ibt -z shstk -o obj/sha256_one_block_sse.o.tmp obj/sha256_one_block_sse.o
00:02:24.903  mv obj/aes_cfb_sse.o.tmp obj/aes_cfb_sse.o
00:02:24.903  nasm -MD obj/mb_mgr_aes192_submit_sse_x8.d -MT obj/mb_mgr_aes192_submit_sse_x8.o -o obj/mb_mgr_aes192_submit_sse_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes192_submit_sse_x8.asm
00:02:24.903  ld -r -z ibt -z shstk -o obj/sha384_one_block_sse.o.tmp obj/sha384_one_block_sse.o
00:02:24.903  ld -r -z ibt -z shstk -o obj/sha512_one_block_sse.o.tmp obj/sha512_one_block_sse.o
00:02:24.903  mv obj/sha224_one_block_sse.o.tmp obj/sha224_one_block_sse.o
00:02:24.903  nasm -MD obj/mb_mgr_aes256_flush_sse_x8.d -MT obj/mb_mgr_aes256_flush_sse_x8.o -o obj/mb_mgr_aes256_flush_sse_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes256_flush_sse_x8.asm
00:02:24.903  ld -r -z ibt -z shstk -o obj/aes256_cbc_mac_x4.o.tmp obj/aes256_cbc_mac_x4.o
00:02:24.903  mv obj/sha256_one_block_sse.o.tmp obj/sha256_one_block_sse.o
00:02:24.903  nasm -MD obj/mb_mgr_aes256_submit_sse_x8.d -MT obj/mb_mgr_aes256_submit_sse_x8.o -o obj/mb_mgr_aes256_submit_sse_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes256_submit_sse_x8.asm
00:02:24.903  mv obj/sha384_one_block_sse.o.tmp obj/sha384_one_block_sse.o
00:02:24.903  mv obj/sha512_one_block_sse.o.tmp obj/sha512_one_block_sse.o
00:02:24.903  nasm -MD obj/mb_mgr_aes_cmac_submit_flush_sse.d -MT obj/mb_mgr_aes_cmac_submit_flush_sse.o -o obj/mb_mgr_aes_cmac_submit_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes_cmac_submit_flush_sse.asm
00:02:24.903  ld -r -z ibt -z shstk -o obj/aes256_cbc_mac_x8_sse.o.tmp obj/aes256_cbc_mac_x8_sse.o
00:02:24.903  ld -r -z ibt -z shstk -o obj/sha256_ni_x2_sse.o.tmp obj/sha256_ni_x2_sse.o
00:02:24.903  mv obj/aes256_cbc_mac_x4.o.tmp obj/aes256_cbc_mac_x4.o
00:02:24.903  nasm -MD obj/mb_mgr_aes256_cmac_submit_flush_sse.d -MT obj/mb_mgr_aes256_cmac_submit_flush_sse.o -o obj/mb_mgr_aes256_cmac_submit_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes256_cmac_submit_flush_sse.asm
00:02:24.903  ld -r -z ibt -z shstk -o obj/sha1_one_block_sse.o.tmp obj/sha1_one_block_sse.o
00:02:24.903  mv obj/aes256_cbc_mac_x8_sse.o.tmp obj/aes256_cbc_mac_x8_sse.o
00:02:24.903  ld -r -z ibt -z shstk -o obj/aes_xcbc_mac_128_x4.o.tmp obj/aes_xcbc_mac_128_x4.o
00:02:24.903  nasm -MD obj/mb_mgr_aes_cmac_submit_flush_sse_x8.d -MT obj/mb_mgr_aes_cmac_submit_flush_sse_x8.o -o obj/mb_mgr_aes_cmac_submit_flush_sse_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes_cmac_submit_flush_sse_x8.asm
00:02:24.903  ld -r -z ibt -z shstk -o obj/aes128_cbc_mac_x8_sse.o.tmp obj/aes128_cbc_mac_x8_sse.o
00:02:24.903  mv obj/sha256_ni_x2_sse.o.tmp obj/sha256_ni_x2_sse.o
00:02:24.903  mv obj/sha1_one_block_sse.o.tmp obj/sha1_one_block_sse.o
00:02:24.903  nasm -MD obj/mb_mgr_aes256_cmac_submit_flush_sse_x8.d -MT obj/mb_mgr_aes256_cmac_submit_flush_sse_x8.o -o obj/mb_mgr_aes256_cmac_submit_flush_sse_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes256_cmac_submit_flush_sse_x8.asm
00:02:24.903  ld -r -z ibt -z shstk -o obj/aes_keyexp_256.o.tmp obj/aes_keyexp_256.o
00:02:24.903  mv obj/aes_xcbc_mac_128_x4.o.tmp obj/aes_xcbc_mac_128_x4.o
00:02:24.903  mv obj/aes128_cbc_mac_x8_sse.o.tmp obj/aes128_cbc_mac_x8_sse.o
00:02:24.903  nasm -MD obj/mb_mgr_aes_ccm_auth_submit_flush_sse.d -MT obj/mb_mgr_aes_ccm_auth_submit_flush_sse.o -o obj/mb_mgr_aes_ccm_auth_submit_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes_ccm_auth_submit_flush_sse.asm
00:02:24.903  nasm -MD obj/mb_mgr_aes_ccm_auth_submit_flush_sse_x8.d -MT obj/mb_mgr_aes_ccm_auth_submit_flush_sse_x8.o -o obj/mb_mgr_aes_ccm_auth_submit_flush_sse_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes_ccm_auth_submit_flush_sse_x8.asm
00:02:24.903  mv obj/aes_keyexp_256.o.tmp obj/aes_keyexp_256.o
00:02:24.903  nasm -MD obj/mb_mgr_aes256_ccm_auth_submit_flush_sse.d -MT obj/mb_mgr_aes256_ccm_auth_submit_flush_sse.o -o obj/mb_mgr_aes256_ccm_auth_submit_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes256_ccm_auth_submit_flush_sse.asm
00:02:24.903  ld -r -z ibt -z shstk -o obj/aes_cfb_sse_no_aesni.o.tmp obj/aes_cfb_sse_no_aesni.o
00:02:24.903  nasm -MD obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_x8.d -MT obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_x8.o -o obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes256_ccm_auth_submit_flush_sse_x8.asm
00:02:24.903  nasm -MD obj/mb_mgr_aes_xcbc_flush_sse.d -MT obj/mb_mgr_aes_xcbc_flush_sse.o -o obj/mb_mgr_aes_xcbc_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes_xcbc_flush_sse.asm
00:02:24.903  nasm -MD obj/mb_mgr_aes_xcbc_submit_sse.d -MT obj/mb_mgr_aes_xcbc_submit_sse.o -o obj/mb_mgr_aes_xcbc_submit_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes_xcbc_submit_sse.asm
00:02:24.903  ld -r -z ibt -z shstk -o obj/sha1_ni_x2_sse.o.tmp obj/sha1_ni_x2_sse.o
00:02:24.903  mv obj/aes_cfb_sse_no_aesni.o.tmp obj/aes_cfb_sse_no_aesni.o
00:02:24.903  nasm -MD obj/mb_mgr_hmac_md5_flush_sse.d -MT obj/mb_mgr_hmac_md5_flush_sse.o -o obj/mb_mgr_hmac_md5_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_md5_flush_sse.asm
00:02:24.903  ld -r -z ibt -z shstk -o obj/aes_ecb_by4_sse.o.tmp obj/aes_ecb_by4_sse.o
00:02:24.903  nasm -MD obj/mb_mgr_hmac_md5_submit_sse.d -MT obj/mb_mgr_hmac_md5_submit_sse.o -o obj/mb_mgr_hmac_md5_submit_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_md5_submit_sse.asm
00:02:24.903  nasm -MD obj/mb_mgr_hmac_flush_sse.d -MT obj/mb_mgr_hmac_flush_sse.o -o obj/mb_mgr_hmac_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_flush_sse.asm
00:02:24.903  mv obj/sha1_ni_x2_sse.o.tmp obj/sha1_ni_x2_sse.o
00:02:24.903  nasm -MD obj/mb_mgr_hmac_submit_sse.d -MT obj/mb_mgr_hmac_submit_sse.o -o obj/mb_mgr_hmac_submit_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_submit_sse.asm
00:02:24.903  mv obj/aes_ecb_by4_sse.o.tmp obj/aes_ecb_by4_sse.o
00:02:24.903  nasm -MD obj/mb_mgr_hmac_sha_224_flush_sse.d -MT obj/mb_mgr_hmac_sha_224_flush_sse.o -o obj/mb_mgr_hmac_sha_224_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_sha_224_flush_sse.asm
00:02:24.903  nasm -MD obj/mb_mgr_hmac_sha_224_submit_sse.d -MT obj/mb_mgr_hmac_sha_224_submit_sse.o -o obj/mb_mgr_hmac_sha_224_submit_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_sha_224_submit_sse.asm
00:02:24.904  nasm -MD obj/mb_mgr_hmac_sha_256_flush_sse.d -MT obj/mb_mgr_hmac_sha_256_flush_sse.o -o obj/mb_mgr_hmac_sha_256_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_sha_256_flush_sse.asm
00:02:24.904  nasm -MD obj/mb_mgr_hmac_sha_256_submit_sse.d -MT obj/mb_mgr_hmac_sha_256_submit_sse.o -o obj/mb_mgr_hmac_sha_256_submit_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_sha_256_submit_sse.asm
00:02:24.904  nasm -MD obj/mb_mgr_hmac_sha_384_flush_sse.d -MT obj/mb_mgr_hmac_sha_384_flush_sse.o -o obj/mb_mgr_hmac_sha_384_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_sha_384_flush_sse.asm
00:02:24.904  nasm -MD obj/mb_mgr_hmac_sha_384_submit_sse.d -MT obj/mb_mgr_hmac_sha_384_submit_sse.o -o obj/mb_mgr_hmac_sha_384_submit_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_sha_384_submit_sse.asm
00:02:24.904  nasm -MD obj/mb_mgr_hmac_sha_512_flush_sse.d -MT obj/mb_mgr_hmac_sha_512_flush_sse.o -o obj/mb_mgr_hmac_sha_512_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_sha_512_flush_sse.asm
00:02:24.904  ld -r -z ibt -z shstk -o obj/mb_mgr_aes_flush_sse.o.tmp obj/mb_mgr_aes_flush_sse.o
00:02:24.904  nasm -MD obj/mb_mgr_hmac_sha_512_submit_sse.d -MT obj/mb_mgr_hmac_sha_512_submit_sse.o -o obj/mb_mgr_hmac_sha_512_submit_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_sha_512_submit_sse.asm
00:02:24.904  nasm -MD obj/mb_mgr_hmac_flush_ni_sse.d -MT obj/mb_mgr_hmac_flush_ni_sse.o -o obj/mb_mgr_hmac_flush_ni_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_flush_ni_sse.asm
00:02:24.904  nasm -MD obj/mb_mgr_hmac_submit_ni_sse.d -MT obj/mb_mgr_hmac_submit_ni_sse.o -o obj/mb_mgr_hmac_submit_ni_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_submit_ni_sse.asm
00:02:24.904  ld -r -z ibt -z shstk -o obj/mb_mgr_aes_submit_sse.o.tmp obj/mb_mgr_aes_submit_sse.o
00:02:24.904  ld -r -z ibt -z shstk -o obj/mb_mgr_aes192_flush_sse.o.tmp obj/mb_mgr_aes192_flush_sse.o
00:02:24.904  mv obj/mb_mgr_aes_flush_sse.o.tmp obj/mb_mgr_aes_flush_sse.o
00:02:24.904  nasm -MD obj/mb_mgr_hmac_sha_224_flush_ni_sse.d -MT obj/mb_mgr_hmac_sha_224_flush_ni_sse.o -o obj/mb_mgr_hmac_sha_224_flush_ni_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_sha_224_flush_ni_sse.asm
00:02:24.904  nasm -MD obj/mb_mgr_hmac_sha_224_submit_ni_sse.d -MT obj/mb_mgr_hmac_sha_224_submit_ni_sse.o -o obj/mb_mgr_hmac_sha_224_submit_ni_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_sha_224_submit_ni_sse.asm
00:02:24.904  ld -r -z ibt -z shstk -o obj/mb_mgr_aes192_submit_sse.o.tmp obj/mb_mgr_aes192_submit_sse.o
00:02:24.904  nasm -MD obj/mb_mgr_hmac_sha_256_flush_ni_sse.d -MT obj/mb_mgr_hmac_sha_256_flush_ni_sse.o -o obj/mb_mgr_hmac_sha_256_flush_ni_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_sha_256_flush_ni_sse.asm
00:02:24.904  ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_flush_sse.o.tmp obj/mb_mgr_aes256_flush_sse.o
00:02:24.904  ld -r -z ibt -z shstk -o obj/mb_mgr_aes_flush_sse_x8.o.tmp obj/mb_mgr_aes_flush_sse_x8.o
00:02:24.904  mv obj/mb_mgr_aes_submit_sse.o.tmp obj/mb_mgr_aes_submit_sse.o
00:02:24.904  ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_submit_sse.o.tmp obj/mb_mgr_aes256_submit_sse.o
00:02:24.904  mv obj/mb_mgr_aes192_flush_sse.o.tmp obj/mb_mgr_aes192_flush_sse.o
00:02:24.904  nasm -MD obj/mb_mgr_hmac_sha_256_submit_ni_sse.d -MT obj/mb_mgr_hmac_sha_256_submit_ni_sse.o -o obj/mb_mgr_hmac_sha_256_submit_ni_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_sha_256_submit_ni_sse.asm
00:02:24.904  mv obj/mb_mgr_aes192_submit_sse.o.tmp obj/mb_mgr_aes192_submit_sse.o
00:02:24.904  mv obj/mb_mgr_aes256_flush_sse.o.tmp obj/mb_mgr_aes256_flush_sse.o
00:02:24.904  nasm -MD obj/mb_mgr_zuc_submit_flush_sse.d -MT obj/mb_mgr_zuc_submit_flush_sse.o -o obj/mb_mgr_zuc_submit_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_zuc_submit_flush_sse.asm
00:02:24.904  mv obj/mb_mgr_aes_flush_sse_x8.o.tmp obj/mb_mgr_aes_flush_sse_x8.o
00:02:24.904  mv obj/mb_mgr_aes256_submit_sse.o.tmp obj/mb_mgr_aes256_submit_sse.o
00:02:24.904  nasm -MD obj/mb_mgr_zuc_submit_flush_gfni_sse.d -MT obj/mb_mgr_zuc_submit_flush_gfni_sse.o -o obj/mb_mgr_zuc_submit_flush_gfni_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_zuc_submit_flush_gfni_sse.asm
00:02:24.904  ld -r -z ibt -z shstk -o obj/mb_mgr_aes_submit_sse_x8.o.tmp obj/mb_mgr_aes_submit_sse_x8.o
00:02:24.904  nasm -MD obj/ethernet_fcs_sse.d -MT obj/ethernet_fcs_sse.o -o obj/ethernet_fcs_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/ethernet_fcs_sse.asm
00:02:24.904  ld -r -z ibt -z shstk -o obj/mb_mgr_aes192_flush_sse_x8.o.tmp obj/mb_mgr_aes192_flush_sse_x8.o
00:02:24.904  ld -r -z ibt -z shstk -o obj/mb_mgr_aes192_submit_sse_x8.o.tmp obj/mb_mgr_aes192_submit_sse_x8.o
00:02:24.904  nasm -MD obj/crc16_x25_sse.d -MT obj/crc16_x25_sse.o -o obj/crc16_x25_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/crc16_x25_sse.asm
00:02:24.904  ld -r -z ibt -z shstk -o obj/aes128_cntr_ccm_by8_sse.o.tmp obj/aes128_cntr_ccm_by8_sse.o
00:02:24.904  mv obj/mb_mgr_aes_submit_sse_x8.o.tmp obj/mb_mgr_aes_submit_sse_x8.o
00:02:24.904  nasm -MD obj/crc32_sctp_sse.d -MT obj/crc32_sctp_sse.o -o obj/crc32_sctp_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/crc32_sctp_sse.asm
00:02:24.904  ld -r -z ibt -z shstk -o obj/aes256_cntr_ccm_by8_sse.o.tmp obj/aes256_cntr_ccm_by8_sse.o
00:02:24.904  ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_flush_sse_x8.o.tmp obj/mb_mgr_aes256_flush_sse_x8.o
00:02:24.904  mv obj/mb_mgr_aes192_flush_sse_x8.o.tmp obj/mb_mgr_aes192_flush_sse_x8.o
00:02:24.904  ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_submit_sse_x8.o.tmp obj/mb_mgr_aes256_submit_sse_x8.o
00:02:24.904  mv obj/mb_mgr_aes192_submit_sse_x8.o.tmp obj/mb_mgr_aes192_submit_sse_x8.o
00:02:24.904  mv obj/aes128_cntr_ccm_by8_sse.o.tmp obj/aes128_cntr_ccm_by8_sse.o
00:02:24.904  nasm -MD obj/aes_cbcs_1_9_enc_128_x4.d -MT obj/aes_cbcs_1_9_enc_128_x4.o -o obj/aes_cbcs_1_9_enc_128_x4.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes_cbcs_1_9_enc_128_x4.asm
00:02:24.904  nasm -MD obj/aes128_cbcs_1_9_dec_by4_sse.d -MT obj/aes128_cbcs_1_9_dec_by4_sse.o -o obj/aes128_cbcs_1_9_dec_by4_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes128_cbcs_1_9_dec_by4_sse.asm
00:02:24.904  mv obj/aes256_cntr_ccm_by8_sse.o.tmp obj/aes256_cntr_ccm_by8_sse.o
00:02:24.904  ld -r -z ibt -z shstk -o obj/crc16_x25_sse.o.tmp obj/crc16_x25_sse.o
00:02:24.904  mv obj/mb_mgr_aes256_flush_sse_x8.o.tmp obj/mb_mgr_aes256_flush_sse_x8.o
00:02:24.904  ld -r -z ibt -z shstk -o obj/ethernet_fcs_sse.o.tmp obj/ethernet_fcs_sse.o
00:02:24.904  mv obj/mb_mgr_aes256_submit_sse_x8.o.tmp obj/mb_mgr_aes256_submit_sse_x8.o
00:02:24.904  ld -r -z ibt -z shstk -o obj/crc32_sctp_sse.o.tmp obj/crc32_sctp_sse.o
00:02:24.904  nasm -MD obj/crc32_refl_by8_sse.d -MT obj/crc32_refl_by8_sse.o -o obj/crc32_refl_by8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/crc32_refl_by8_sse.asm
00:02:24.904  nasm -MD obj/crc32_by8_sse.d -MT obj/crc32_by8_sse.o -o obj/crc32_by8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/crc32_by8_sse.asm
00:02:24.904  mv obj/crc16_x25_sse.o.tmp obj/crc16_x25_sse.o
00:02:24.904  mv obj/crc32_sctp_sse.o.tmp obj/crc32_sctp_sse.o
00:02:24.904  mv obj/ethernet_fcs_sse.o.tmp obj/ethernet_fcs_sse.o
00:02:24.904  nasm -MD obj/crc32_lte_sse.d -MT obj/crc32_lte_sse.o -o obj/crc32_lte_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/crc32_lte_sse.asm
00:02:24.904  nasm -MD obj/crc32_fp_sse.d -MT obj/crc32_fp_sse.o -o obj/crc32_fp_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/crc32_fp_sse.asm
00:02:24.904  ld -r -z ibt -z shstk -o obj/mb_mgr_aes_ccm_auth_submit_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes_ccm_auth_submit_flush_sse_no_aesni.o
00:02:24.904  nasm -MD obj/crc32_iuup_sse.d -MT obj/crc32_iuup_sse.o -o obj/crc32_iuup_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/crc32_iuup_sse.asm
00:02:24.904  ld -r -z ibt -z shstk -o obj/mb_mgr_aes_xcbc_flush_sse.o.tmp obj/mb_mgr_aes_xcbc_flush_sse.o
00:02:24.904  ld -r -z ibt -z shstk -o obj/mb_mgr_aes_xcbc_submit_sse.o.tmp obj/mb_mgr_aes_xcbc_submit_sse.o
00:02:24.904  nasm -MD obj/crc32_wimax_sse.d -MT obj/crc32_wimax_sse.o -o obj/crc32_wimax_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/crc32_wimax_sse.asm
00:02:24.904  mv obj/mb_mgr_aes_ccm_auth_submit_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes_ccm_auth_submit_flush_sse_no_aesni.o
00:02:24.904  nasm -MD obj/chacha20_sse.d -MT obj/chacha20_sse.o -o obj/chacha20_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/chacha20_sse.asm
00:02:24.904  ld -r -z ibt -z shstk -o obj/sha_256_mult_sse.o.tmp obj/sha_256_mult_sse.o
00:02:24.904  nasm -MD obj/memcpy_sse.d -MT obj/memcpy_sse.o -o obj/memcpy_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/memcpy_sse.asm
00:02:24.904  ld -r -z ibt -z shstk -o obj/aes128_cbcs_1_9_dec_by4_sse.o.tmp obj/aes128_cbcs_1_9_dec_by4_sse.o
00:02:24.904  mv obj/mb_mgr_aes_xcbc_flush_sse.o.tmp obj/mb_mgr_aes_xcbc_flush_sse.o
00:02:24.904  mv obj/mb_mgr_aes_xcbc_submit_sse.o.tmp obj/mb_mgr_aes_xcbc_submit_sse.o
00:02:24.904  ld -r -z ibt -z shstk -o obj/sha512_x2_sse.o.tmp obj/sha512_x2_sse.o
00:02:24.904  nasm -MD obj/gcm128_sse.d -MT obj/gcm128_sse.o -o obj/gcm128_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/gcm128_sse.asm
00:02:24.904  ld -r -z ibt -z shstk -o obj/crc32_lte_sse.o.tmp obj/crc32_lte_sse.o
00:02:24.904  ld -r -z ibt -z shstk -o obj/crc32_fp_sse.o.tmp obj/crc32_fp_sse.o
00:02:24.904  ld -r -z ibt -z shstk -o obj/crc32_wimax_sse.o.tmp obj/crc32_wimax_sse.o
00:02:24.904  mv obj/sha_256_mult_sse.o.tmp obj/sha_256_mult_sse.o
00:02:24.904  mv obj/aes128_cbcs_1_9_dec_by4_sse.o.tmp obj/aes128_cbcs_1_9_dec_by4_sse.o
00:02:24.904  ld -r -z ibt -z shstk -o obj/crc32_by8_sse.o.tmp obj/crc32_by8_sse.o
00:02:24.904  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_md5_flush_sse.o.tmp obj/mb_mgr_hmac_md5_flush_sse.o
00:02:24.904  mv obj/sha512_x2_sse.o.tmp obj/sha512_x2_sse.o
00:02:24.904  ld -r -z ibt -z shstk -o obj/crc32_iuup_sse.o.tmp obj/crc32_iuup_sse.o
00:02:24.904  mv obj/crc32_fp_sse.o.tmp obj/crc32_fp_sse.o
00:02:24.904  mv obj/crc32_lte_sse.o.tmp obj/crc32_lte_sse.o
00:02:24.904  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_flush_sse.o.tmp obj/mb_mgr_hmac_flush_sse.o
00:02:24.904  mv obj/crc32_wimax_sse.o.tmp obj/crc32_wimax_sse.o
00:02:24.904  nasm -MD obj/gcm192_sse.d -MT obj/gcm192_sse.o -o obj/gcm192_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/gcm192_sse.asm
00:02:24.904  mv obj/crc32_by8_sse.o.tmp obj/crc32_by8_sse.o
00:02:24.904  mv obj/mb_mgr_hmac_md5_flush_sse.o.tmp obj/mb_mgr_hmac_md5_flush_sse.o
00:02:24.904  mv obj/crc32_iuup_sse.o.tmp obj/crc32_iuup_sse.o
00:02:24.904  mv obj/mb_mgr_hmac_flush_sse.o.tmp obj/mb_mgr_hmac_flush_sse.o
00:02:24.904  nasm -MD obj/gcm256_sse.d -MT obj/gcm256_sse.o -o obj/gcm256_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/gcm256_sse.asm
00:02:24.904  ld -r -z ibt -z shstk -o obj/sha1_mult_sse.o.tmp obj/sha1_mult_sse.o
00:02:24.904  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_224_flush_sse.o.tmp obj/mb_mgr_hmac_sha_224_flush_sse.o
00:02:24.904  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_256_flush_sse.o.tmp obj/mb_mgr_hmac_sha_256_flush_sse.o
00:02:24.904  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_flush_ni_sse.o.tmp obj/mb_mgr_hmac_flush_ni_sse.o
00:02:24.904  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_256_flush_ni_sse.o.tmp obj/mb_mgr_hmac_sha_256_flush_ni_sse.o
00:02:24.904  ld -r -z ibt -z shstk -o obj/crc32_refl_by8_sse.o.tmp obj/crc32_refl_by8_sse.o
00:02:24.904  ld -r -z ibt -z shstk -o obj/memcpy_sse.o.tmp obj/memcpy_sse.o
00:02:24.904  nasm -MD obj/aes_cbc_enc_128_x8.d -MT obj/aes_cbc_enc_128_x8.o -o obj/aes_cbc_enc_128_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes_cbc_enc_128_x8.asm
00:02:24.904  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_md5_submit_sse.o.tmp obj/mb_mgr_hmac_md5_submit_sse.o
00:02:24.904  mv obj/sha1_mult_sse.o.tmp obj/sha1_mult_sse.o
00:02:24.904  mv obj/mb_mgr_hmac_sha_224_flush_sse.o.tmp obj/mb_mgr_hmac_sha_224_flush_sse.o
00:02:24.904  mv obj/mb_mgr_hmac_sha_256_flush_sse.o.tmp obj/mb_mgr_hmac_sha_256_flush_sse.o
00:02:24.904  mv obj/mb_mgr_hmac_flush_ni_sse.o.tmp obj/mb_mgr_hmac_flush_ni_sse.o
00:02:24.904  ld -r -z ibt -z shstk -o obj/mb_mgr_aes_ccm_auth_submit_flush_sse_x8.o.tmp obj/mb_mgr_aes_ccm_auth_submit_flush_sse_x8.o
00:02:24.904  mv obj/mb_mgr_hmac_sha_256_flush_ni_sse.o.tmp obj/mb_mgr_hmac_sha_256_flush_ni_sse.o
00:02:24.904  mv obj/crc32_refl_by8_sse.o.tmp obj/crc32_refl_by8_sse.o
00:02:24.904  ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_cmac_submit_flush_sse.o.tmp obj/mb_mgr_aes256_cmac_submit_flush_sse.o
00:02:24.904  mv obj/memcpy_sse.o.tmp obj/memcpy_sse.o
00:02:24.904  mv obj/mb_mgr_hmac_md5_submit_sse.o.tmp obj/mb_mgr_hmac_md5_submit_sse.o
00:02:24.904  nasm -MD obj/aes_cbc_enc_192_x8.d -MT obj/aes_cbc_enc_192_x8.o -o obj/aes_cbc_enc_192_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes_cbc_enc_192_x8.asm
00:02:24.904  mv obj/mb_mgr_aes_ccm_auth_submit_flush_sse_x8.o.tmp obj/mb_mgr_aes_ccm_auth_submit_flush_sse_x8.o
00:02:24.904  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_224_flush_ni_sse.o.tmp obj/mb_mgr_hmac_sha_224_flush_ni_sse.o
00:02:24.904  mv obj/mb_mgr_aes256_cmac_submit_flush_sse.o.tmp obj/mb_mgr_aes256_cmac_submit_flush_sse.o
00:02:24.904  nasm -MD obj/aes_cbc_enc_256_x8.d -MT obj/aes_cbc_enc_256_x8.o -o obj/aes_cbc_enc_256_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes_cbc_enc_256_x8.asm
00:02:24.904  nasm -MD obj/aes128_cbc_dec_by8_avx.d -MT obj/aes128_cbc_dec_by8_avx.o -o obj/aes128_cbc_dec_by8_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes128_cbc_dec_by8_avx.asm
00:02:24.904  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_224_submit_sse.o.tmp obj/mb_mgr_hmac_sha_224_submit_sse.o
00:02:24.904  ld -r -z ibt -z shstk -o obj/aes256_cntr_by8_sse.o.tmp obj/aes256_cntr_by8_sse.o
00:02:24.904  mv obj/mb_mgr_hmac_sha_224_flush_ni_sse.o.tmp obj/mb_mgr_hmac_sha_224_flush_ni_sse.o
00:02:24.904  nasm -MD obj/aes192_cbc_dec_by8_avx.d -MT obj/aes192_cbc_dec_by8_avx.o -o obj/aes192_cbc_dec_by8_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes192_cbc_dec_by8_avx.asm
00:02:24.904  nasm -MD obj/aes256_cbc_dec_by8_avx.d -MT obj/aes256_cbc_dec_by8_avx.o -o obj/aes256_cbc_dec_by8_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes256_cbc_dec_by8_avx.asm
00:02:24.904  nasm -MD obj/pon_avx.d -MT obj/pon_avx.o -o obj/pon_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/pon_avx.asm
00:02:24.904  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_512_flush_sse.o.tmp obj/mb_mgr_hmac_sha_512_flush_sse.o
00:02:24.904  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_512_submit_sse.o.tmp obj/mb_mgr_hmac_sha_512_submit_sse.o
00:02:24.904  ld -r -z ibt -z shstk -o obj/aes_cbcs_1_9_enc_128_x4.o.tmp obj/aes_cbcs_1_9_enc_128_x4.o
00:02:24.904  ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_cmac_submit_flush_sse_x8.o.tmp obj/mb_mgr_aes256_cmac_submit_flush_sse_x8.o
00:02:24.904  mv obj/mb_mgr_hmac_sha_224_submit_sse.o.tmp obj/mb_mgr_hmac_sha_224_submit_sse.o
00:02:24.904  mv obj/aes256_cntr_by8_sse.o.tmp obj/aes256_cntr_by8_sse.o
00:02:24.904  nasm -MD obj/aes128_cntr_by8_avx.d -MT obj/aes128_cntr_by8_avx.o -o obj/aes128_cntr_by8_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes128_cntr_by8_avx.asm
00:02:24.904  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_384_submit_sse.o.tmp obj/mb_mgr_hmac_sha_384_submit_sse.o
00:02:24.904  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_submit_sse.o.tmp obj/mb_mgr_hmac_submit_sse.o
00:02:24.904  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_submit_ni_sse.o.tmp obj/mb_mgr_hmac_submit_ni_sse.o
00:02:24.904  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_224_submit_ni_sse.o.tmp obj/mb_mgr_hmac_sha_224_submit_ni_sse.o
00:02:24.904  mv obj/mb_mgr_hmac_sha_512_flush_sse.o.tmp obj/mb_mgr_hmac_sha_512_flush_sse.o
00:02:24.904  mv obj/mb_mgr_hmac_sha_512_submit_sse.o.tmp obj/mb_mgr_hmac_sha_512_submit_sse.o
00:02:24.904  mv obj/aes_cbcs_1_9_enc_128_x4.o.tmp obj/aes_cbcs_1_9_enc_128_x4.o
00:02:24.904  mv obj/mb_mgr_aes256_cmac_submit_flush_sse_x8.o.tmp obj/mb_mgr_aes256_cmac_submit_flush_sse_x8.o
00:02:24.904  ld -r -z ibt -z shstk -o obj/mb_mgr_aes_cmac_submit_flush_sse_x8.o.tmp obj/mb_mgr_aes_cmac_submit_flush_sse_x8.o
00:02:24.904  mv obj/mb_mgr_hmac_sha_384_submit_sse.o.tmp obj/mb_mgr_hmac_sha_384_submit_sse.o
00:02:24.904  mv obj/mb_mgr_hmac_submit_sse.o.tmp obj/mb_mgr_hmac_submit_sse.o
00:02:24.904  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_256_submit_ni_sse.o.tmp obj/mb_mgr_hmac_sha_256_submit_ni_sse.o
00:02:24.904  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_384_flush_sse.o.tmp obj/mb_mgr_hmac_sha_384_flush_sse.o
00:02:24.904  mv obj/mb_mgr_hmac_submit_ni_sse.o.tmp obj/mb_mgr_hmac_submit_ni_sse.o
00:02:24.904  mv obj/mb_mgr_hmac_sha_224_submit_ni_sse.o.tmp obj/mb_mgr_hmac_sha_224_submit_ni_sse.o
00:02:24.904  nasm -MD obj/aes192_cntr_by8_avx.d -MT obj/aes192_cntr_by8_avx.o -o obj/aes192_cntr_by8_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes192_cntr_by8_avx.asm
00:02:24.904  mv obj/mb_mgr_aes_cmac_submit_flush_sse_x8.o.tmp obj/mb_mgr_aes_cmac_submit_flush_sse_x8.o
00:02:24.904  mv obj/mb_mgr_hmac_sha_256_submit_ni_sse.o.tmp obj/mb_mgr_hmac_sha_256_submit_ni_sse.o
00:02:24.904  nasm -MD obj/aes256_cntr_by8_avx.d -MT obj/aes256_cntr_by8_avx.o -o obj/aes256_cntr_by8_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes256_cntr_by8_avx.asm
00:02:24.904  mv obj/mb_mgr_hmac_sha_384_flush_sse.o.tmp obj/mb_mgr_hmac_sha_384_flush_sse.o
00:02:24.904  nasm -MD obj/aes128_cntr_ccm_by8_avx.d -MT obj/aes128_cntr_ccm_by8_avx.o -o obj/aes128_cntr_ccm_by8_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes128_cntr_ccm_by8_avx.asm
00:02:24.904  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_256_submit_sse.o.tmp obj/mb_mgr_hmac_sha_256_submit_sse.o
00:02:24.904  nasm -MD obj/aes256_cntr_ccm_by8_avx.d -MT obj/aes256_cntr_ccm_by8_avx.o -o obj/aes256_cntr_ccm_by8_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes256_cntr_ccm_by8_avx.asm
00:02:24.904  nasm -MD obj/aes_ecb_by4_avx.d -MT obj/aes_ecb_by4_avx.o -o obj/aes_ecb_by4_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes_ecb_by4_avx.asm
00:02:24.904  ld -r -z ibt -z shstk -o obj/mb_mgr_aes_cmac_submit_flush_sse.o.tmp obj/mb_mgr_aes_cmac_submit_flush_sse.o
00:02:24.904  ld -r -z ibt -z shstk -o obj/aes_cbc_enc_128_x8.o.tmp obj/aes_cbc_enc_128_x8.o
00:02:24.904  mv obj/mb_mgr_hmac_sha_256_submit_sse.o.tmp obj/mb_mgr_hmac_sha_256_submit_sse.o
00:02:24.904  nasm -MD obj/aes_cfb_avx.d -MT obj/aes_cfb_avx.o -o obj/aes_cfb_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes_cfb_avx.asm
00:02:24.904  nasm -MD obj/aes128_cbc_mac_x8.d -MT obj/aes128_cbc_mac_x8.o -o obj/aes128_cbc_mac_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes128_cbc_mac_x8.asm
00:02:24.904  mv obj/mb_mgr_aes_cmac_submit_flush_sse.o.tmp obj/mb_mgr_aes_cmac_submit_flush_sse.o
00:02:24.904  nasm -MD obj/aes256_cbc_mac_x8.d -MT obj/aes256_cbc_mac_x8.o -o obj/aes256_cbc_mac_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes256_cbc_mac_x8.asm
00:02:24.904  mv obj/aes_cbc_enc_128_x8.o.tmp obj/aes_cbc_enc_128_x8.o
00:02:24.904  nasm -MD obj/aes_xcbc_mac_128_x8.d -MT obj/aes_xcbc_mac_128_x8.o -o obj/aes_xcbc_mac_128_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes_xcbc_mac_128_x8.asm
00:02:24.904  ld -r -z ibt -z shstk -o obj/aes128_cbc_dec_by8_avx.o.tmp obj/aes128_cbc_dec_by8_avx.o
00:02:24.904  nasm -MD obj/md5_x4x2_avx.d -MT obj/md5_x4x2_avx.o -o obj/md5_x4x2_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/md5_x4x2_avx.asm
00:02:24.904  nasm -MD obj/sha1_mult_avx.d -MT obj/sha1_mult_avx.o -o obj/sha1_mult_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/sha1_mult_avx.asm
00:02:24.904  ld -r -z ibt -z shstk -o obj/aes128_cntr_by8_sse.o.tmp obj/aes128_cntr_by8_sse.o
00:02:24.904  mv obj/aes128_cbc_dec_by8_avx.o.tmp obj/aes128_cbc_dec_by8_avx.o
00:02:24.904  nasm -MD obj/sha1_one_block_avx.d -MT obj/sha1_one_block_avx.o -o obj/sha1_one_block_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/sha1_one_block_avx.asm
00:02:24.904  nasm -MD obj/sha224_one_block_avx.d -MT obj/sha224_one_block_avx.o -o obj/sha224_one_block_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/sha224_one_block_avx.asm
00:02:24.904  mv obj/aes128_cntr_by8_sse.o.tmp obj/aes128_cntr_by8_sse.o
00:02:24.904  nasm -MD obj/sha256_one_block_avx.d -MT obj/sha256_one_block_avx.o -o obj/sha256_one_block_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/sha256_one_block_avx.asm
00:02:24.904  ld -r -z ibt -z shstk -o obj/aes192_cbc_dec_by8_avx.o.tmp obj/aes192_cbc_dec_by8_avx.o
00:02:24.904  ld -r -z ibt -z shstk -o obj/aes192_cntr_by8_sse.o.tmp obj/aes192_cntr_by8_sse.o
00:02:24.904  nasm -MD obj/sha_256_mult_avx.d -MT obj/sha_256_mult_avx.o -o obj/sha_256_mult_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/sha_256_mult_avx.asm
00:02:24.904  ld -r -z ibt -z shstk -o obj/aes256_cbc_dec_by8_avx.o.tmp obj/aes256_cbc_dec_by8_avx.o
00:02:24.905  nasm -MD obj/sha384_one_block_avx.d -MT obj/sha384_one_block_avx.o -o obj/sha384_one_block_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/sha384_one_block_avx.asm
00:02:24.905  mv obj/aes192_cbc_dec_by8_avx.o.tmp obj/aes192_cbc_dec_by8_avx.o
00:02:24.905  ld -r -z ibt -z shstk -o obj/mb_mgr_aes_ccm_auth_submit_flush_sse.o.tmp obj/mb_mgr_aes_ccm_auth_submit_flush_sse.o
00:02:24.905  ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_ccm_auth_submit_flush_sse.o.tmp obj/mb_mgr_aes256_ccm_auth_submit_flush_sse.o
00:02:24.905  mv obj/aes192_cntr_by8_sse.o.tmp obj/aes192_cntr_by8_sse.o
00:02:24.905  nasm -MD obj/sha512_one_block_avx.d -MT obj/sha512_one_block_avx.o -o obj/sha512_one_block_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/sha512_one_block_avx.asm
00:02:24.905  mv obj/aes256_cbc_dec_by8_avx.o.tmp obj/aes256_cbc_dec_by8_avx.o
00:02:24.905  nasm -MD obj/sha512_x2_avx.d -MT obj/sha512_x2_avx.o -o obj/sha512_x2_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/sha512_x2_avx.asm
00:02:24.905  ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_no_aesni.o
00:02:24.905  mv obj/mb_mgr_aes_ccm_auth_submit_flush_sse.o.tmp obj/mb_mgr_aes_ccm_auth_submit_flush_sse.o
00:02:24.905  mv obj/mb_mgr_aes256_ccm_auth_submit_flush_sse.o.tmp obj/mb_mgr_aes256_ccm_auth_submit_flush_sse.o
00:02:24.905  nasm -MD obj/zuc_avx.d -MT obj/zuc_avx.o -o obj/zuc_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/zuc_avx.asm
00:02:24.905  ld -r -z ibt -z shstk -o obj/crc32_by8_sse_no_aesni.o.tmp obj/crc32_by8_sse_no_aesni.o
00:02:24.905  nasm -MD obj/mb_mgr_aes_flush_avx.d -MT obj/mb_mgr_aes_flush_avx.o -o obj/mb_mgr_aes_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_aes_flush_avx.asm
00:02:24.905  ld -r -z ibt -z shstk -o obj/pon_sse.o.tmp obj/pon_sse.o
00:02:24.905  ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_x8.o.tmp obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_x8.o
00:02:24.905  ld -r -z ibt -z shstk -o obj/aes_cbc_enc_192_x8.o.tmp obj/aes_cbc_enc_192_x8.o
00:02:24.905  mv obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_no_aesni.o
00:02:24.905  mv obj/crc32_by8_sse_no_aesni.o.tmp obj/crc32_by8_sse_no_aesni.o
00:02:24.905  nasm -MD obj/mb_mgr_aes_submit_avx.d -MT obj/mb_mgr_aes_submit_avx.o -o obj/mb_mgr_aes_submit_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_aes_submit_avx.asm
00:02:24.905  ld -r -z ibt -z shstk -o obj/aes_cbc_enc_256_x8.o.tmp obj/aes_cbc_enc_256_x8.o
00:02:24.905  mv obj/pon_sse.o.tmp obj/pon_sse.o
00:02:24.905  nasm -MD obj/mb_mgr_aes192_flush_avx.d -MT obj/mb_mgr_aes192_flush_avx.o -o obj/mb_mgr_aes192_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_aes192_flush_avx.asm
00:02:24.905  ld -r -z ibt -z shstk -o obj/aes_cfb_avx.o.tmp obj/aes_cfb_avx.o
00:02:24.905  mv obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_x8.o.tmp obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_x8.o
00:02:24.905  mv obj/aes_cbc_enc_192_x8.o.tmp obj/aes_cbc_enc_192_x8.o
00:02:24.905  mv obj/aes_cbc_enc_256_x8.o.tmp obj/aes_cbc_enc_256_x8.o
00:02:24.905  nasm -MD obj/mb_mgr_aes192_submit_avx.d -MT obj/mb_mgr_aes192_submit_avx.o -o obj/mb_mgr_aes192_submit_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_aes192_submit_avx.asm
00:02:25.171  nasm -MD obj/mb_mgr_aes256_flush_avx.d -MT obj/mb_mgr_aes256_flush_avx.o -o obj/mb_mgr_aes256_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_aes256_flush_avx.asm
00:02:25.171  mv obj/aes_cfb_avx.o.tmp obj/aes_cfb_avx.o
00:02:25.171  nasm -MD obj/mb_mgr_aes256_submit_avx.d -MT obj/mb_mgr_aes256_submit_avx.o -o obj/mb_mgr_aes256_submit_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_aes256_submit_avx.asm
00:02:25.171  ld -r -z ibt -z shstk -o obj/aes_xcbc_mac_128_x8.o.tmp obj/aes_xcbc_mac_128_x8.o
00:02:25.171  ld -r -z ibt -z shstk -o obj/sha224_one_block_avx.o.tmp obj/sha224_one_block_avx.o
00:02:25.171  nasm -MD obj/mb_mgr_aes_cmac_submit_flush_avx.d -MT obj/mb_mgr_aes_cmac_submit_flush_avx.o -o obj/mb_mgr_aes_cmac_submit_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_aes_cmac_submit_flush_avx.asm
00:02:25.171  nasm -MD obj/mb_mgr_aes256_cmac_submit_flush_avx.d -MT obj/mb_mgr_aes256_cmac_submit_flush_avx.o -o obj/mb_mgr_aes256_cmac_submit_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_aes256_cmac_submit_flush_avx.asm
00:02:25.171  ld -r -z ibt -z shstk -o obj/aes_ecb_by4_avx.o.tmp obj/aes_ecb_by4_avx.o
00:02:25.171  mv obj/aes_xcbc_mac_128_x8.o.tmp obj/aes_xcbc_mac_128_x8.o
00:02:25.171  nasm -MD obj/mb_mgr_aes_ccm_auth_submit_flush_avx.d -MT obj/mb_mgr_aes_ccm_auth_submit_flush_avx.o -o obj/mb_mgr_aes_ccm_auth_submit_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_aes_ccm_auth_submit_flush_avx.asm
00:02:25.171  ld -r -z ibt -z shstk -o obj/sha256_one_block_avx.o.tmp obj/sha256_one_block_avx.o
00:02:25.171  mv obj/sha224_one_block_avx.o.tmp obj/sha224_one_block_avx.o
00:02:25.171  mv obj/aes_ecb_by4_avx.o.tmp obj/aes_ecb_by4_avx.o
00:02:25.171  nasm -MD obj/mb_mgr_aes256_ccm_auth_submit_flush_avx.d -MT obj/mb_mgr_aes256_ccm_auth_submit_flush_avx.o -o obj/mb_mgr_aes256_ccm_auth_submit_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_aes256_ccm_auth_submit_flush_avx.asm
00:02:25.171  nasm -MD obj/mb_mgr_aes_xcbc_flush_avx.d -MT obj/mb_mgr_aes_xcbc_flush_avx.o -o obj/mb_mgr_aes_xcbc_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_aes_xcbc_flush_avx.asm
00:02:25.171  mv obj/sha256_one_block_avx.o.tmp obj/sha256_one_block_avx.o
00:02:25.171  nasm -MD obj/mb_mgr_aes_xcbc_submit_avx.d -MT obj/mb_mgr_aes_xcbc_submit_avx.o -o obj/mb_mgr_aes_xcbc_submit_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_aes_xcbc_submit_avx.asm
00:02:25.171  ld -r -z ibt -z shstk -o obj/sha384_one_block_avx.o.tmp obj/sha384_one_block_avx.o
00:02:25.171  nasm -MD obj/mb_mgr_hmac_md5_flush_avx.d -MT obj/mb_mgr_hmac_md5_flush_avx.o -o obj/mb_mgr_hmac_md5_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_hmac_md5_flush_avx.asm
00:02:25.171  nasm -MD obj/mb_mgr_hmac_md5_submit_avx.d -MT obj/mb_mgr_hmac_md5_submit_avx.o -o obj/mb_mgr_hmac_md5_submit_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_hmac_md5_submit_avx.asm
00:02:25.171  ld -r -z ibt -z shstk -o obj/sha1_one_block_avx.o.tmp obj/sha1_one_block_avx.o
00:02:25.171  ld -r -z ibt -z shstk -o obj/sha512_one_block_avx.o.tmp obj/sha512_one_block_avx.o
00:02:25.171  mv obj/sha384_one_block_avx.o.tmp obj/sha384_one_block_avx.o
00:02:25.171  ld -r -z ibt -z shstk -o obj/aes128_cbc_mac_x8.o.tmp obj/aes128_cbc_mac_x8.o
00:02:25.171  nasm -MD obj/mb_mgr_hmac_flush_avx.d -MT obj/mb_mgr_hmac_flush_avx.o -o obj/mb_mgr_hmac_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_hmac_flush_avx.asm
00:02:25.171  ld -r -z ibt -z shstk -o obj/aes256_cbc_mac_x8.o.tmp obj/aes256_cbc_mac_x8.o
00:02:25.171  mv obj/sha1_one_block_avx.o.tmp obj/sha1_one_block_avx.o
00:02:25.171  nasm -MD obj/mb_mgr_hmac_submit_avx.d -MT obj/mb_mgr_hmac_submit_avx.o -o obj/mb_mgr_hmac_submit_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_hmac_submit_avx.asm
00:02:25.171  ld -r -z ibt -z shstk -o obj/md5_x4x2_sse.o.tmp obj/md5_x4x2_sse.o
00:02:25.171  mv obj/sha512_one_block_avx.o.tmp obj/sha512_one_block_avx.o
00:02:25.171  mv obj/aes128_cbc_mac_x8.o.tmp obj/aes128_cbc_mac_x8.o
00:02:25.171  nasm -MD obj/mb_mgr_hmac_sha_224_flush_avx.d -MT obj/mb_mgr_hmac_sha_224_flush_avx.o -o obj/mb_mgr_hmac_sha_224_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_hmac_sha_224_flush_avx.asm
00:02:25.171  mv obj/aes256_cbc_mac_x8.o.tmp obj/aes256_cbc_mac_x8.o
00:02:25.171  nasm -MD obj/mb_mgr_hmac_sha_224_submit_avx.d -MT obj/mb_mgr_hmac_sha_224_submit_avx.o -o obj/mb_mgr_hmac_sha_224_submit_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_hmac_sha_224_submit_avx.asm
00:02:25.171  mv obj/md5_x4x2_sse.o.tmp obj/md5_x4x2_sse.o
00:02:25.171  nasm -MD obj/mb_mgr_hmac_sha_256_flush_avx.d -MT obj/mb_mgr_hmac_sha_256_flush_avx.o -o obj/mb_mgr_hmac_sha_256_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_hmac_sha_256_flush_avx.asm
00:02:25.171  nasm -MD obj/mb_mgr_hmac_sha_256_submit_avx.d -MT obj/mb_mgr_hmac_sha_256_submit_avx.o -o obj/mb_mgr_hmac_sha_256_submit_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_hmac_sha_256_submit_avx.asm
00:02:25.171  nasm -MD obj/mb_mgr_hmac_sha_384_flush_avx.d -MT obj/mb_mgr_hmac_sha_384_flush_avx.o -o obj/mb_mgr_hmac_sha_384_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_hmac_sha_384_flush_avx.asm
00:02:25.171  nasm -MD obj/mb_mgr_hmac_sha_384_submit_avx.d -MT obj/mb_mgr_hmac_sha_384_submit_avx.o -o obj/mb_mgr_hmac_sha_384_submit_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_hmac_sha_384_submit_avx.asm
00:02:25.171  nasm -MD obj/mb_mgr_hmac_sha_512_flush_avx.d -MT obj/mb_mgr_hmac_sha_512_flush_avx.o -o obj/mb_mgr_hmac_sha_512_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_hmac_sha_512_flush_avx.asm
00:02:25.171  nasm -MD obj/mb_mgr_hmac_sha_512_submit_avx.d -MT obj/mb_mgr_hmac_sha_512_submit_avx.o -o obj/mb_mgr_hmac_sha_512_submit_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_hmac_sha_512_submit_avx.asm
00:02:25.171  nasm -MD obj/mb_mgr_zuc_submit_flush_avx.d -MT obj/mb_mgr_zuc_submit_flush_avx.o -o obj/mb_mgr_zuc_submit_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_zuc_submit_flush_avx.asm
00:02:25.171  nasm -MD obj/ethernet_fcs_avx.d -MT obj/ethernet_fcs_avx.o -o obj/ethernet_fcs_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/ethernet_fcs_avx.asm
00:02:25.171  nasm -MD obj/crc16_x25_avx.d -MT obj/crc16_x25_avx.o -o obj/crc16_x25_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/crc16_x25_avx.asm
00:02:25.171  ld -r -z ibt -z shstk -o obj/mb_mgr_aes192_submit_avx.o.tmp obj/mb_mgr_aes192_submit_avx.o
00:02:25.171  nasm -MD obj/aes_cbcs_1_9_enc_128_x8.d -MT obj/aes_cbcs_1_9_enc_128_x8.o -o obj/aes_cbcs_1_9_enc_128_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes_cbcs_1_9_enc_128_x8.asm
00:02:25.171  ld -r -z ibt -z shstk -o obj/mb_mgr_aes_flush_avx.o.tmp obj/mb_mgr_aes_flush_avx.o
00:02:25.171  nasm -MD obj/aes128_cbcs_1_9_dec_by8_avx.d -MT obj/aes128_cbcs_1_9_dec_by8_avx.o -o obj/aes128_cbcs_1_9_dec_by8_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes128_cbcs_1_9_dec_by8_avx.asm
00:02:25.171  nasm -MD obj/mb_mgr_aes128_cbcs_1_9_submit_avx.d -MT obj/mb_mgr_aes128_cbcs_1_9_submit_avx.o -o obj/mb_mgr_aes128_cbcs_1_9_submit_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_aes128_cbcs_1_9_submit_avx.asm
00:02:25.171  mv obj/mb_mgr_aes192_submit_avx.o.tmp obj/mb_mgr_aes192_submit_avx.o
00:02:25.171  nasm -MD obj/mb_mgr_aes128_cbcs_1_9_flush_avx.d -MT obj/mb_mgr_aes128_cbcs_1_9_flush_avx.o -o obj/mb_mgr_aes128_cbcs_1_9_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_aes128_cbcs_1_9_flush_avx.asm
00:02:25.171  mv obj/mb_mgr_aes_flush_avx.o.tmp obj/mb_mgr_aes_flush_avx.o
00:02:25.171  nasm -MD obj/crc32_refl_by8_avx.d -MT obj/crc32_refl_by8_avx.o -o obj/crc32_refl_by8_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/crc32_refl_by8_avx.asm
00:02:25.171  ld -r -z ibt -z shstk -o obj/mb_mgr_aes_xcbc_flush_avx.o.tmp obj/mb_mgr_aes_xcbc_flush_avx.o
00:02:25.171  ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_flush_avx.o.tmp obj/mb_mgr_aes256_flush_avx.o
00:02:25.171  nasm -MD obj/crc32_by8_avx.d -MT obj/crc32_by8_avx.o -o obj/crc32_by8_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/crc32_by8_avx.asm
00:02:25.171  ld -r -z ibt -z shstk -o obj/ethernet_fcs_avx.o.tmp obj/ethernet_fcs_avx.o
00:02:25.171  ld -r -z ibt -z shstk -o obj/crc16_x25_avx.o.tmp obj/crc16_x25_avx.o
00:02:25.171  nasm -MD obj/crc32_sctp_avx.d -MT obj/crc32_sctp_avx.o -o obj/crc32_sctp_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/crc32_sctp_avx.asm
00:02:25.171  mv obj/mb_mgr_aes256_flush_avx.o.tmp obj/mb_mgr_aes256_flush_avx.o
00:02:25.171  mv obj/mb_mgr_aes_xcbc_flush_avx.o.tmp obj/mb_mgr_aes_xcbc_flush_avx.o
00:02:25.171  nasm -MD obj/crc32_lte_avx.d -MT obj/crc32_lte_avx.o -o obj/crc32_lte_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/crc32_lte_avx.asm
00:02:25.171  mv obj/ethernet_fcs_avx.o.tmp obj/ethernet_fcs_avx.o
00:02:25.171  ld -r -z ibt -z shstk -o obj/mb_mgr_aes_submit_avx.o.tmp obj/mb_mgr_aes_submit_avx.o
00:02:25.171  mv obj/crc16_x25_avx.o.tmp obj/crc16_x25_avx.o
00:02:25.171  nasm -MD obj/crc32_fp_avx.d -MT obj/crc32_fp_avx.o -o obj/crc32_fp_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/crc32_fp_avx.asm
00:02:25.171  nasm -MD obj/crc32_iuup_avx.d -MT obj/crc32_iuup_avx.o -o obj/crc32_iuup_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/crc32_iuup_avx.asm
00:02:25.171  ld -r -z ibt -z shstk -o obj/mb_mgr_zuc_submit_flush_sse.o.tmp obj/mb_mgr_zuc_submit_flush_sse.o
00:02:25.171  ld -r -z ibt -z shstk -o obj/mb_mgr_aes192_flush_avx.o.tmp obj/mb_mgr_aes192_flush_avx.o
00:02:25.171  mv obj/mb_mgr_aes_submit_avx.o.tmp obj/mb_mgr_aes_submit_avx.o
00:02:25.171  nasm -MD obj/crc32_wimax_avx.d -MT obj/crc32_wimax_avx.o -o obj/crc32_wimax_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/crc32_wimax_avx.asm
00:02:25.171  nasm -MD obj/chacha20_avx.d -MT obj/chacha20_avx.o -o obj/chacha20_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/chacha20_avx.asm
00:02:25.171  ld -r -z ibt -z shstk -o obj/mb_mgr_zuc_submit_flush_gfni_sse.o.tmp obj/mb_mgr_zuc_submit_flush_gfni_sse.o
00:02:25.171  ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_submit_avx.o.tmp obj/mb_mgr_aes256_submit_avx.o
00:02:25.171  ld -r -z ibt -z shstk -o obj/crc32_sctp_avx.o.tmp obj/crc32_sctp_avx.o
00:02:25.171  mv obj/mb_mgr_zuc_submit_flush_sse.o.tmp obj/mb_mgr_zuc_submit_flush_sse.o
00:02:25.171  mv obj/mb_mgr_aes192_flush_avx.o.tmp obj/mb_mgr_aes192_flush_avx.o
00:02:25.171  nasm -MD obj/memcpy_avx.d -MT obj/memcpy_avx.o -o obj/memcpy_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/memcpy_avx.asm
00:02:25.171  ld -r -z ibt -z shstk -o obj/crc32_lte_avx.o.tmp obj/crc32_lte_avx.o
00:02:25.171  ld -r -z ibt -z shstk -o obj/crc32_iuup_avx.o.tmp obj/crc32_iuup_avx.o
00:02:25.171  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_256_flush_avx.o.tmp obj/mb_mgr_hmac_sha_256_flush_avx.o
00:02:25.171  mv obj/mb_mgr_zuc_submit_flush_gfni_sse.o.tmp obj/mb_mgr_zuc_submit_flush_gfni_sse.o
00:02:25.171  mv obj/mb_mgr_aes256_submit_avx.o.tmp obj/mb_mgr_aes256_submit_avx.o
00:02:25.171  mv obj/crc32_sctp_avx.o.tmp obj/crc32_sctp_avx.o
00:02:25.171  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_md5_flush_avx.o.tmp obj/mb_mgr_hmac_md5_flush_avx.o
00:02:25.171  nasm -MD obj/gcm128_avx_gen2.d -MT obj/gcm128_avx_gen2.o -o obj/gcm128_avx_gen2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/gcm128_avx_gen2.asm
00:02:25.172  ld -r -z ibt -z shstk -o obj/crc32_fp_avx.o.tmp obj/crc32_fp_avx.o
00:02:25.172  mv obj/crc32_lte_avx.o.tmp obj/crc32_lte_avx.o
00:02:25.172  mv obj/crc32_iuup_avx.o.tmp obj/crc32_iuup_avx.o
00:02:25.172  mv obj/mb_mgr_hmac_sha_256_flush_avx.o.tmp obj/mb_mgr_hmac_sha_256_flush_avx.o
00:02:25.172  nasm -MD obj/gcm192_avx_gen2.d -MT obj/gcm192_avx_gen2.o -o obj/gcm192_avx_gen2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/gcm192_avx_gen2.asm
00:02:25.172  ld -r -z ibt -z shstk -o obj/crc32_wimax_avx.o.tmp obj/crc32_wimax_avx.o
00:02:25.172  ld -r -z ibt -z shstk -o obj/memcpy_avx.o.tmp obj/memcpy_avx.o
00:02:25.172  mv obj/mb_mgr_hmac_md5_flush_avx.o.tmp obj/mb_mgr_hmac_md5_flush_avx.o
00:02:25.172  ld -r -z ibt -z shstk -o obj/crc32_refl_by8_avx.o.tmp obj/crc32_refl_by8_avx.o
00:02:25.172  mv obj/crc32_fp_avx.o.tmp obj/crc32_fp_avx.o
00:02:25.172  nasm -MD obj/gcm256_avx_gen2.d -MT obj/gcm256_avx_gen2.o -o obj/gcm256_avx_gen2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/gcm256_avx_gen2.asm
00:02:25.172  ld -r -z ibt -z shstk -o obj/crc32_by8_avx.o.tmp obj/crc32_by8_avx.o
00:02:25.172  mv obj/crc32_wimax_avx.o.tmp obj/crc32_wimax_avx.o
00:02:25.172  mv obj/memcpy_avx.o.tmp obj/memcpy_avx.o
00:02:25.172  ld -r -z ibt -z shstk -o obj/aes256_cntr_ccm_by8_avx.o.tmp obj/aes256_cntr_ccm_by8_avx.o
00:02:25.172  mv obj/crc32_refl_by8_avx.o.tmp obj/crc32_refl_by8_avx.o
00:02:25.172  nasm -MD obj/md5_x8x2_avx2.d -MT obj/md5_x8x2_avx2.o -o obj/md5_x8x2_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/md5_x8x2_avx2.asm
00:02:25.172  ld -r -z ibt -z shstk -o obj/aes128_cntr_ccm_by8_avx.o.tmp obj/aes128_cntr_ccm_by8_avx.o
00:02:25.172  mv obj/crc32_by8_avx.o.tmp obj/crc32_by8_avx.o
00:02:25.172  nasm -MD obj/sha1_x8_avx2.d -MT obj/sha1_x8_avx2.o -o obj/sha1_x8_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/sha1_x8_avx2.asm
00:02:25.172  mv obj/aes256_cntr_ccm_by8_avx.o.tmp obj/aes256_cntr_ccm_by8_avx.o
00:02:25.172  nasm -MD obj/sha256_oct_avx2.d -MT obj/sha256_oct_avx2.o -o obj/sha256_oct_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/sha256_oct_avx2.asm
00:02:25.172  ld -r -z ibt -z shstk -o obj/mb_mgr_aes_xcbc_submit_avx.o.tmp obj/mb_mgr_aes_xcbc_submit_avx.o
00:02:25.172  mv obj/aes128_cntr_ccm_by8_avx.o.tmp obj/aes128_cntr_ccm_by8_avx.o
00:02:25.172  nasm -MD obj/sha512_x4_avx2.d -MT obj/sha512_x4_avx2.o -o obj/sha512_x4_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/sha512_x4_avx2.asm
00:02:25.172  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_flush_avx.o.tmp obj/mb_mgr_hmac_flush_avx.o
00:02:25.172  ld -r -z ibt -z shstk -o obj/sha_256_mult_avx.o.tmp obj/sha_256_mult_avx.o
00:02:25.172  nasm -MD obj/zuc_avx2.d -MT obj/zuc_avx2.o -o obj/zuc_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/zuc_avx2.asm
00:02:25.172  mv obj/mb_mgr_aes_xcbc_submit_avx.o.tmp obj/mb_mgr_aes_xcbc_submit_avx.o
00:02:25.172  nasm -MD obj/mb_mgr_hmac_md5_flush_avx2.d -MT obj/mb_mgr_hmac_md5_flush_avx2.o -o obj/mb_mgr_hmac_md5_flush_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/mb_mgr_hmac_md5_flush_avx2.asm
00:02:25.172  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_224_flush_avx.o.tmp obj/mb_mgr_hmac_sha_224_flush_avx.o
00:02:25.172  nasm -MD obj/mb_mgr_hmac_md5_submit_avx2.d -MT obj/mb_mgr_hmac_md5_submit_avx2.o -o obj/mb_mgr_hmac_md5_submit_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/mb_mgr_hmac_md5_submit_avx2.asm
00:02:25.172  ld -r -z ibt -z shstk -o obj/sha512_x2_avx.o.tmp obj/sha512_x2_avx.o
00:02:25.172  ld -r -z ibt -z shstk -o obj/aes128_cbcs_1_9_dec_by8_avx.o.tmp obj/aes128_cbcs_1_9_dec_by8_avx.o
00:02:25.172  mv obj/mb_mgr_hmac_flush_avx.o.tmp obj/mb_mgr_hmac_flush_avx.o
00:02:25.172  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_md5_submit_avx.o.tmp obj/mb_mgr_hmac_md5_submit_avx.o
00:02:25.172  mv obj/sha_256_mult_avx.o.tmp obj/sha_256_mult_avx.o
00:02:25.172  nasm -MD obj/mb_mgr_hmac_flush_avx2.d -MT obj/mb_mgr_hmac_flush_avx2.o -o obj/mb_mgr_hmac_flush_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/mb_mgr_hmac_flush_avx2.asm
00:02:25.172  ld -r -z ibt -z shstk -o obj/aes_cbcs_1_9_enc_128_x8.o.tmp obj/aes_cbcs_1_9_enc_128_x8.o
00:02:25.172  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_256_submit_avx.o.tmp obj/mb_mgr_hmac_sha_256_submit_avx.o
00:02:25.172  mv obj/mb_mgr_hmac_sha_224_flush_avx.o.tmp obj/mb_mgr_hmac_sha_224_flush_avx.o
00:02:25.172  mv obj/sha512_x2_avx.o.tmp obj/sha512_x2_avx.o
00:02:25.172  mv obj/aes128_cbcs_1_9_dec_by8_avx.o.tmp obj/aes128_cbcs_1_9_dec_by8_avx.o
00:02:25.172  nasm -MD obj/mb_mgr_hmac_submit_avx2.d -MT obj/mb_mgr_hmac_submit_avx2.o -o obj/mb_mgr_hmac_submit_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/mb_mgr_hmac_submit_avx2.asm
00:02:25.172  mv obj/mb_mgr_hmac_md5_submit_avx.o.tmp obj/mb_mgr_hmac_md5_submit_avx.o
00:02:25.172  nasm -MD obj/mb_mgr_hmac_sha_224_flush_avx2.d -MT obj/mb_mgr_hmac_sha_224_flush_avx2.o -o obj/mb_mgr_hmac_sha_224_flush_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/mb_mgr_hmac_sha_224_flush_avx2.asm
00:02:25.172  mv obj/aes_cbcs_1_9_enc_128_x8.o.tmp obj/aes_cbcs_1_9_enc_128_x8.o
00:02:25.172  mv obj/mb_mgr_hmac_sha_256_submit_avx.o.tmp obj/mb_mgr_hmac_sha_256_submit_avx.o
00:02:25.172  nasm -MD obj/mb_mgr_hmac_sha_224_submit_avx2.d -MT obj/mb_mgr_hmac_sha_224_submit_avx2.o -o obj/mb_mgr_hmac_sha_224_submit_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/mb_mgr_hmac_sha_224_submit_avx2.asm
00:02:25.172  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_224_submit_avx.o.tmp obj/mb_mgr_hmac_sha_224_submit_avx.o
00:02:25.172  nasm -MD obj/mb_mgr_hmac_sha_256_flush_avx2.d -MT obj/mb_mgr_hmac_sha_256_flush_avx2.o -o obj/mb_mgr_hmac_sha_256_flush_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/mb_mgr_hmac_sha_256_flush_avx2.asm
00:02:25.172  ld -r -z ibt -z shstk -o obj/mb_mgr_aes128_cbcs_1_9_submit_avx.o.tmp obj/mb_mgr_aes128_cbcs_1_9_submit_avx.o
00:02:25.172  nasm -MD obj/mb_mgr_hmac_sha_256_submit_avx2.d -MT obj/mb_mgr_hmac_sha_256_submit_avx2.o -o obj/mb_mgr_hmac_sha_256_submit_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/mb_mgr_hmac_sha_256_submit_avx2.asm
00:02:25.172  ld -r -z ibt -z shstk -o obj/aes128_cntr_by8_avx.o.tmp obj/aes128_cntr_by8_avx.o
00:02:25.172  ld -r -z ibt -z shstk -o obj/mb_mgr_aes128_cbcs_1_9_flush_avx.o.tmp obj/mb_mgr_aes128_cbcs_1_9_flush_avx.o
00:02:25.172  mv obj/mb_mgr_hmac_sha_224_submit_avx.o.tmp obj/mb_mgr_hmac_sha_224_submit_avx.o
00:02:25.172  nasm -MD obj/mb_mgr_hmac_sha_384_flush_avx2.d -MT obj/mb_mgr_hmac_sha_384_flush_avx2.o -o obj/mb_mgr_hmac_sha_384_flush_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/mb_mgr_hmac_sha_384_flush_avx2.asm
00:02:25.172  ld -r -z ibt -z shstk -o obj/mb_mgr_aes_cmac_submit_flush_avx.o.tmp obj/mb_mgr_aes_cmac_submit_flush_avx.o
00:02:25.172  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_512_flush_avx.o.tmp obj/mb_mgr_hmac_sha_512_flush_avx.o
00:02:25.172  mv obj/mb_mgr_aes128_cbcs_1_9_submit_avx.o.tmp obj/mb_mgr_aes128_cbcs_1_9_submit_avx.o
00:02:25.172  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_384_flush_avx.o.tmp obj/mb_mgr_hmac_sha_384_flush_avx.o
00:02:25.172  nasm -MD obj/mb_mgr_hmac_sha_384_submit_avx2.d -MT obj/mb_mgr_hmac_sha_384_submit_avx2.o -o obj/mb_mgr_hmac_sha_384_submit_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/mb_mgr_hmac_sha_384_submit_avx2.asm
00:02:25.172  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_submit_avx.o.tmp obj/mb_mgr_hmac_submit_avx.o
00:02:25.172  mv obj/aes128_cntr_by8_avx.o.tmp obj/aes128_cntr_by8_avx.o
00:02:25.172  mv obj/mb_mgr_aes128_cbcs_1_9_flush_avx.o.tmp obj/mb_mgr_aes128_cbcs_1_9_flush_avx.o
00:02:25.172  ld -r -z ibt -z shstk -o obj/sha1_mult_avx.o.tmp obj/sha1_mult_avx.o
00:02:25.172  mv obj/mb_mgr_aes_cmac_submit_flush_avx.o.tmp obj/mb_mgr_aes_cmac_submit_flush_avx.o
00:02:25.172  mv obj/mb_mgr_hmac_sha_512_flush_avx.o.tmp obj/mb_mgr_hmac_sha_512_flush_avx.o
00:02:25.172  nasm -MD obj/mb_mgr_hmac_sha_512_flush_avx2.d -MT obj/mb_mgr_hmac_sha_512_flush_avx2.o -o obj/mb_mgr_hmac_sha_512_flush_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/mb_mgr_hmac_sha_512_flush_avx2.asm
00:02:25.172  mv obj/mb_mgr_hmac_sha_384_flush_avx.o.tmp obj/mb_mgr_hmac_sha_384_flush_avx.o
00:02:25.172  mv obj/mb_mgr_hmac_submit_avx.o.tmp obj/mb_mgr_hmac_submit_avx.o
00:02:25.172  nasm -MD obj/mb_mgr_hmac_sha_512_submit_avx2.d -MT obj/mb_mgr_hmac_sha_512_submit_avx2.o -o obj/mb_mgr_hmac_sha_512_submit_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/mb_mgr_hmac_sha_512_submit_avx2.asm
00:02:25.172  ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_cmac_submit_flush_avx.o.tmp obj/mb_mgr_aes256_cmac_submit_flush_avx.o
00:02:25.172  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_384_submit_avx.o.tmp obj/mb_mgr_hmac_sha_384_submit_avx.o
00:02:25.172  mv obj/sha1_mult_avx.o.tmp obj/sha1_mult_avx.o
00:02:25.172  nasm -MD obj/mb_mgr_zuc_submit_flush_avx2.d -MT obj/mb_mgr_zuc_submit_flush_avx2.o -o obj/mb_mgr_zuc_submit_flush_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/mb_mgr_zuc_submit_flush_avx2.asm
00:02:25.172  nasm -MD obj/chacha20_avx2.d -MT obj/chacha20_avx2.o -o obj/chacha20_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/chacha20_avx2.asm
00:02:25.172  nasm -MD obj/gcm128_avx_gen4.d -MT obj/gcm128_avx_gen4.o -o obj/gcm128_avx_gen4.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/gcm128_avx_gen4.asm
00:02:25.172  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_512_submit_avx.o.tmp obj/mb_mgr_hmac_sha_512_submit_avx.o
00:02:25.172  mv obj/mb_mgr_aes256_cmac_submit_flush_avx.o.tmp obj/mb_mgr_aes256_cmac_submit_flush_avx.o
00:02:25.172  mv obj/mb_mgr_hmac_sha_384_submit_avx.o.tmp obj/mb_mgr_hmac_sha_384_submit_avx.o
00:02:25.172  nasm -MD obj/gcm192_avx_gen4.d -MT obj/gcm192_avx_gen4.o -o obj/gcm192_avx_gen4.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/gcm192_avx_gen4.asm
00:02:25.172  ld -r -z ibt -z shstk -o obj/aes128_cbc_mac_x4_no_aesni.o.tmp obj/aes128_cbc_mac_x4_no_aesni.o
00:02:25.172  nasm -MD obj/gcm256_avx_gen4.d -MT obj/gcm256_avx_gen4.o -o obj/gcm256_avx_gen4.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/gcm256_avx_gen4.asm
00:02:25.172  mv obj/mb_mgr_hmac_sha_512_submit_avx.o.tmp obj/mb_mgr_hmac_sha_512_submit_avx.o
00:02:25.172  nasm -MD obj/sha1_x16_avx512.d -MT obj/sha1_x16_avx512.o -o obj/sha1_x16_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/sha1_x16_avx512.asm
00:02:25.172  nasm -MD obj/sha256_x16_avx512.d -MT obj/sha256_x16_avx512.o -o obj/sha256_x16_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/sha256_x16_avx512.asm
00:02:25.172  mv obj/aes128_cbc_mac_x4_no_aesni.o.tmp obj/aes128_cbc_mac_x4_no_aesni.o
00:02:25.172  nasm -MD obj/sha512_x8_avx512.d -MT obj/sha512_x8_avx512.o -o obj/sha512_x8_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/sha512_x8_avx512.asm
00:02:25.172  nasm -MD obj/des_x16_avx512.d -MT obj/des_x16_avx512.o -o obj/des_x16_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/des_x16_avx512.asm
00:02:25.172  ld -r -z ibt -z shstk -o obj/crc32_refl_by8_sse_no_aesni.o.tmp obj/crc32_refl_by8_sse_no_aesni.o
00:02:25.172  nasm -MD obj/cntr_vaes_avx512.d -MT obj/cntr_vaes_avx512.o -o obj/cntr_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/cntr_vaes_avx512.asm
00:02:25.172  nasm -MD obj/cntr_ccm_vaes_avx512.d -MT obj/cntr_ccm_vaes_avx512.o -o obj/cntr_ccm_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/cntr_ccm_vaes_avx512.asm
00:02:25.172  nasm -MD obj/aes_cbc_dec_vaes_avx512.d -MT obj/aes_cbc_dec_vaes_avx512.o -o obj/aes_cbc_dec_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/aes_cbc_dec_vaes_avx512.asm
00:02:25.172  mv obj/crc32_refl_by8_sse_no_aesni.o.tmp obj/crc32_refl_by8_sse_no_aesni.o
00:02:25.173  nasm -MD obj/aes_cbc_enc_vaes_avx512.d -MT obj/aes_cbc_enc_vaes_avx512.o -o obj/aes_cbc_enc_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/aes_cbc_enc_vaes_avx512.asm
00:02:25.173  ld -r -z ibt -z shstk -o obj/aes192_cntr_by8_avx.o.tmp obj/aes192_cntr_by8_avx.o
00:02:25.173  nasm -MD obj/aes_cbcs_enc_vaes_avx512.d -MT obj/aes_cbcs_enc_vaes_avx512.o -o obj/aes_cbcs_enc_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/aes_cbcs_enc_vaes_avx512.asm
00:02:25.173  ld -r -z ibt -z shstk -o obj/mb_mgr_aes_ccm_auth_submit_flush_avx.o.tmp obj/mb_mgr_aes_ccm_auth_submit_flush_avx.o
00:02:25.173  nasm -MD obj/aes_cbcs_dec_vaes_avx512.d -MT obj/aes_cbcs_dec_vaes_avx512.o -o obj/aes_cbcs_dec_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/aes_cbcs_dec_vaes_avx512.asm
00:02:25.173  nasm -MD obj/aes_docsis_dec_avx512.d -MT obj/aes_docsis_dec_avx512.o -o obj/aes_docsis_dec_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/aes_docsis_dec_avx512.asm
00:02:25.173  mv obj/aes192_cntr_by8_avx.o.tmp obj/aes192_cntr_by8_avx.o
00:02:25.173  nasm -MD obj/aes_docsis_enc_avx512.d -MT obj/aes_docsis_enc_avx512.o -o obj/aes_docsis_enc_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/aes_docsis_enc_avx512.asm
00:02:25.173  mv obj/mb_mgr_aes_ccm_auth_submit_flush_avx.o.tmp obj/mb_mgr_aes_ccm_auth_submit_flush_avx.o
00:02:25.173  ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_ccm_auth_submit_flush_avx.o.tmp obj/mb_mgr_aes256_ccm_auth_submit_flush_avx.o
00:02:25.173  nasm -MD obj/aes_docsis_dec_vaes_avx512.d -MT obj/aes_docsis_dec_vaes_avx512.o -o obj/aes_docsis_dec_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/aes_docsis_dec_vaes_avx512.asm
00:02:25.173  nasm -MD obj/aes_docsis_enc_vaes_avx512.d -MT obj/aes_docsis_enc_vaes_avx512.o -o obj/aes_docsis_enc_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/aes_docsis_enc_vaes_avx512.asm
00:02:25.173  nasm -MD obj/zuc_avx512.d -MT obj/zuc_avx512.o -o obj/zuc_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/zuc_avx512.asm
00:02:25.173  nasm -MD obj/mb_mgr_aes_submit_avx512.d -MT obj/mb_mgr_aes_submit_avx512.o -o obj/mb_mgr_aes_submit_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_aes_submit_avx512.asm
00:02:25.173  mv obj/mb_mgr_aes256_ccm_auth_submit_flush_avx.o.tmp obj/mb_mgr_aes256_ccm_auth_submit_flush_avx.o
00:02:25.173  nasm -MD obj/mb_mgr_aes_flush_avx512.d -MT obj/mb_mgr_aes_flush_avx512.o -o obj/mb_mgr_aes_flush_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_aes_flush_avx512.asm
00:02:25.173  nasm -MD obj/mb_mgr_aes192_submit_avx512.d -MT obj/mb_mgr_aes192_submit_avx512.o -o obj/mb_mgr_aes192_submit_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_aes192_submit_avx512.asm
00:02:25.173  nasm -MD obj/mb_mgr_aes192_flush_avx512.d -MT obj/mb_mgr_aes192_flush_avx512.o -o obj/mb_mgr_aes192_flush_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_aes192_flush_avx512.asm
00:02:25.173  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_flush_avx2.o.tmp obj/mb_mgr_hmac_flush_avx2.o
00:02:25.173  nasm -MD obj/mb_mgr_aes256_submit_avx512.d -MT obj/mb_mgr_aes256_submit_avx512.o -o obj/mb_mgr_aes256_submit_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_aes256_submit_avx512.asm
00:02:25.173  nasm -MD obj/mb_mgr_aes256_flush_avx512.d -MT obj/mb_mgr_aes256_flush_avx512.o -o obj/mb_mgr_aes256_flush_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_aes256_flush_avx512.asm
00:02:25.173  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_md5_flush_avx2.o.tmp obj/mb_mgr_hmac_md5_flush_avx2.o
00:02:25.173  nasm -MD obj/mb_mgr_hmac_flush_avx512.d -MT obj/mb_mgr_hmac_flush_avx512.o -o obj/mb_mgr_hmac_flush_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_hmac_flush_avx512.asm
00:02:25.173  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_384_flush_avx2.o.tmp obj/mb_mgr_hmac_sha_384_flush_avx2.o
00:02:25.173  mv obj/mb_mgr_hmac_flush_avx2.o.tmp obj/mb_mgr_hmac_flush_avx2.o
00:02:25.173  nasm -MD obj/mb_mgr_hmac_submit_avx512.d -MT obj/mb_mgr_hmac_submit_avx512.o -o obj/mb_mgr_hmac_submit_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_hmac_submit_avx512.asm
00:02:25.173  nasm -MD obj/mb_mgr_hmac_sha_224_flush_avx512.d -MT obj/mb_mgr_hmac_sha_224_flush_avx512.o -o obj/mb_mgr_hmac_sha_224_flush_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_hmac_sha_224_flush_avx512.asm
00:02:25.173  mv obj/mb_mgr_hmac_md5_flush_avx2.o.tmp obj/mb_mgr_hmac_md5_flush_avx2.o
00:02:25.173  nasm -MD obj/mb_mgr_hmac_sha_224_submit_avx512.d -MT obj/mb_mgr_hmac_sha_224_submit_avx512.o -o obj/mb_mgr_hmac_sha_224_submit_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_hmac_sha_224_submit_avx512.asm
00:02:25.173  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_224_flush_avx2.o.tmp obj/mb_mgr_hmac_sha_224_flush_avx2.o
00:02:25.173  mv obj/mb_mgr_hmac_sha_384_flush_avx2.o.tmp obj/mb_mgr_hmac_sha_384_flush_avx2.o
00:02:25.173  ld -r -z ibt -z shstk -o obj/aes_cbcs_1_9_enc_128_x4_no_aesni.o.tmp obj/aes_cbcs_1_9_enc_128_x4_no_aesni.o
00:02:25.173  nasm -MD obj/mb_mgr_hmac_sha_256_flush_avx512.d -MT obj/mb_mgr_hmac_sha_256_flush_avx512.o -o obj/mb_mgr_hmac_sha_256_flush_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_hmac_sha_256_flush_avx512.asm
00:02:25.173  nasm -MD obj/mb_mgr_hmac_sha_256_submit_avx512.d -MT obj/mb_mgr_hmac_sha_256_submit_avx512.o -o obj/mb_mgr_hmac_sha_256_submit_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_hmac_sha_256_submit_avx512.asm
00:02:25.173  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_256_flush_avx2.o.tmp obj/mb_mgr_hmac_sha_256_flush_avx2.o
00:02:25.173  nasm -MD obj/mb_mgr_hmac_sha_384_flush_avx512.d -MT obj/mb_mgr_hmac_sha_384_flush_avx512.o -o obj/mb_mgr_hmac_sha_384_flush_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_hmac_sha_384_flush_avx512.asm
00:02:25.173  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_md5_submit_avx2.o.tmp obj/mb_mgr_hmac_md5_submit_avx2.o
00:02:25.173  mv obj/mb_mgr_hmac_sha_224_flush_avx2.o.tmp obj/mb_mgr_hmac_sha_224_flush_avx2.o
00:02:25.173  mv obj/aes_cbcs_1_9_enc_128_x4_no_aesni.o.tmp obj/aes_cbcs_1_9_enc_128_x4_no_aesni.o
00:02:25.173  nasm -MD obj/mb_mgr_hmac_sha_384_submit_avx512.d -MT obj/mb_mgr_hmac_sha_384_submit_avx512.o -o obj/mb_mgr_hmac_sha_384_submit_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_hmac_sha_384_submit_avx512.asm
00:02:25.173  mv obj/mb_mgr_hmac_sha_256_flush_avx2.o.tmp obj/mb_mgr_hmac_sha_256_flush_avx2.o
00:02:25.173  nasm -MD obj/mb_mgr_hmac_sha_512_flush_avx512.d -MT obj/mb_mgr_hmac_sha_512_flush_avx512.o -o obj/mb_mgr_hmac_sha_512_flush_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_hmac_sha_512_flush_avx512.asm
00:02:25.173  mv obj/mb_mgr_hmac_md5_submit_avx2.o.tmp obj/mb_mgr_hmac_md5_submit_avx2.o
00:02:25.173  nasm -MD obj/mb_mgr_hmac_sha_512_submit_avx512.d -MT obj/mb_mgr_hmac_sha_512_submit_avx512.o -o obj/mb_mgr_hmac_sha_512_submit_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_hmac_sha_512_submit_avx512.asm
00:02:25.173  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_512_flush_avx2.o.tmp obj/mb_mgr_hmac_sha_512_flush_avx2.o
00:02:25.173  nasm -MD obj/mb_mgr_des_avx512.d -MT obj/mb_mgr_des_avx512.o -o obj/mb_mgr_des_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_des_avx512.asm
00:02:25.173  nasm -MD obj/mb_mgr_aes_cmac_submit_flush_vaes_avx512.d -MT obj/mb_mgr_aes_cmac_submit_flush_vaes_avx512.o -o obj/mb_mgr_aes_cmac_submit_flush_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_aes_cmac_submit_flush_vaes_avx512.asm
00:02:25.173  ld -r -z ibt -z shstk -o obj/aes_cbc_enc_128_x4_no_aesni.o.tmp obj/aes_cbc_enc_128_x4_no_aesni.o
00:02:25.173  ld -r -z ibt -z shstk -o obj/aes_cbc_enc_192_x4_no_aesni.o.tmp obj/aes_cbc_enc_192_x4_no_aesni.o
00:02:25.173  mv obj/mb_mgr_hmac_sha_512_flush_avx2.o.tmp obj/mb_mgr_hmac_sha_512_flush_avx2.o
00:02:25.173  nasm -MD obj/mb_mgr_aes256_cmac_submit_flush_vaes_avx512.d -MT obj/mb_mgr_aes256_cmac_submit_flush_vaes_avx512.o -o obj/mb_mgr_aes256_cmac_submit_flush_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_aes256_cmac_submit_flush_vaes_avx512.asm
00:02:25.173  nasm -MD obj/mb_mgr_aes_ccm_auth_submit_flush_vaes_avx512.d -MT obj/mb_mgr_aes_ccm_auth_submit_flush_vaes_avx512.o -o obj/mb_mgr_aes_ccm_auth_submit_flush_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_aes_ccm_auth_submit_flush_vaes_avx512.asm
00:02:25.173  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_224_submit_avx2.o.tmp obj/mb_mgr_hmac_sha_224_submit_avx2.o
00:02:25.173  nasm -MD obj/mb_mgr_aes256_ccm_auth_submit_flush_vaes_avx512.d -MT obj/mb_mgr_aes256_ccm_auth_submit_flush_vaes_avx512.o -o obj/mb_mgr_aes256_ccm_auth_submit_flush_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_aes256_ccm_auth_submit_flush_vaes_avx512.asm
00:02:25.173  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_submit_avx2.o.tmp obj/mb_mgr_hmac_submit_avx2.o
00:02:25.173  mv obj/aes_cbc_enc_128_x4_no_aesni.o.tmp obj/aes_cbc_enc_128_x4_no_aesni.o
00:02:25.173  mv obj/aes_cbc_enc_192_x4_no_aesni.o.tmp obj/aes_cbc_enc_192_x4_no_aesni.o
00:02:25.173  nasm -MD obj/mb_mgr_aes_xcbc_submit_flush_vaes_avx512.d -MT obj/mb_mgr_aes_xcbc_submit_flush_vaes_avx512.o -o obj/mb_mgr_aes_xcbc_submit_flush_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_aes_xcbc_submit_flush_vaes_avx512.asm
00:02:25.173  ld -r -z ibt -z shstk -o obj/pon_avx.o.tmp obj/pon_avx.o
00:02:25.173  ld -r -z ibt -z shstk -o obj/aes256_cntr_by8_avx.o.tmp obj/aes256_cntr_by8_avx.o
00:02:25.173  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_256_submit_avx2.o.tmp obj/mb_mgr_hmac_sha_256_submit_avx2.o
00:02:25.173  mv obj/mb_mgr_hmac_sha_224_submit_avx2.o.tmp obj/mb_mgr_hmac_sha_224_submit_avx2.o
00:02:25.173  mv obj/mb_mgr_hmac_submit_avx2.o.tmp obj/mb_mgr_hmac_submit_avx2.o
00:02:25.173  nasm -MD obj/mb_mgr_zuc_submit_flush_avx512.d -MT obj/mb_mgr_zuc_submit_flush_avx512.o -o obj/mb_mgr_zuc_submit_flush_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_zuc_submit_flush_avx512.asm
00:02:25.173  ld -r -z ibt -z shstk -o obj/sha256_oct_avx2.o.tmp obj/sha256_oct_avx2.o
00:02:25.173  ld -r -z ibt -z shstk -o obj/sha512_x4_avx2.o.tmp obj/sha512_x4_avx2.o
00:02:25.173  mv obj/pon_avx.o.tmp obj/pon_avx.o
00:02:25.173  mv obj/aes256_cntr_by8_avx.o.tmp obj/aes256_cntr_by8_avx.o
00:02:25.173  mv obj/mb_mgr_hmac_sha_256_submit_avx2.o.tmp obj/mb_mgr_hmac_sha_256_submit_avx2.o
00:02:25.173  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_512_submit_avx2.o.tmp obj/mb_mgr_hmac_sha_512_submit_avx2.o
00:02:25.173  ld -r -z ibt -z shstk -o obj/mb_mgr_zuc_submit_flush_avx.o.tmp obj/mb_mgr_zuc_submit_flush_avx.o
00:02:25.173  nasm -MD obj/mb_mgr_zuc_submit_flush_gfni_avx512.d -MT obj/mb_mgr_zuc_submit_flush_gfni_avx512.o -o obj/mb_mgr_zuc_submit_flush_gfni_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_zuc_submit_flush_gfni_avx512.asm
00:02:25.173  mv obj/sha256_oct_avx2.o.tmp obj/sha256_oct_avx2.o
00:02:25.173  mv obj/sha512_x4_avx2.o.tmp obj/sha512_x4_avx2.o
00:02:25.173  nasm -MD obj/chacha20_avx512.d -MT obj/chacha20_avx512.o -o obj/chacha20_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/chacha20_avx512.asm
00:02:25.173  mv obj/mb_mgr_hmac_sha_512_submit_avx2.o.tmp obj/mb_mgr_hmac_sha_512_submit_avx2.o
00:02:25.173  mv obj/mb_mgr_zuc_submit_flush_avx.o.tmp obj/mb_mgr_zuc_submit_flush_avx.o
00:02:25.173  nasm -MD obj/poly_avx512.d -MT obj/poly_avx512.o -o obj/poly_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/poly_avx512.asm
00:02:25.173  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_384_submit_avx2.o.tmp obj/mb_mgr_hmac_sha_384_submit_avx2.o
00:02:25.173  ld -r -z ibt -z shstk -o obj/sha1_x8_avx2.o.tmp obj/sha1_x8_avx2.o
00:02:25.173  nasm -MD obj/poly_fma_avx512.d -MT obj/poly_fma_avx512.o -o obj/poly_fma_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/poly_fma_avx512.asm
00:02:25.174  nasm -MD obj/ethernet_fcs_avx512.d -MT obj/ethernet_fcs_avx512.o -o obj/ethernet_fcs_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/ethernet_fcs_avx512.asm
00:02:25.174  mv obj/mb_mgr_hmac_sha_384_submit_avx2.o.tmp obj/mb_mgr_hmac_sha_384_submit_avx2.o
00:02:25.174  ld -r -z ibt -z shstk -o obj/mb_mgr_aes_submit_avx512.o.tmp obj/mb_mgr_aes_submit_avx512.o
00:02:25.174  nasm -MD obj/crc16_x25_avx512.d -MT obj/crc16_x25_avx512.o -o obj/crc16_x25_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/crc16_x25_avx512.asm
00:02:25.174  ld -r -z ibt -z shstk -o obj/aes_cbcs_enc_vaes_avx512.o.tmp obj/aes_cbcs_enc_vaes_avx512.o
00:02:25.174  mv obj/sha1_x8_avx2.o.tmp obj/sha1_x8_avx2.o
00:02:25.174  nasm -MD obj/crc32_refl_by16_vclmul_avx512.d -MT obj/crc32_refl_by16_vclmul_avx512.o -o obj/crc32_refl_by16_vclmul_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/crc32_refl_by16_vclmul_avx512.asm
00:02:25.174  nasm -MD obj/crc32_by16_vclmul_avx512.d -MT obj/crc32_by16_vclmul_avx512.o -o obj/crc32_by16_vclmul_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/crc32_by16_vclmul_avx512.asm
00:02:25.174  ld -r -z ibt -z shstk -o obj/mb_mgr_aes192_submit_avx512.o.tmp obj/mb_mgr_aes192_submit_avx512.o
00:02:25.174  mv obj/mb_mgr_aes_submit_avx512.o.tmp obj/mb_mgr_aes_submit_avx512.o
00:02:25.174  nasm -MD obj/mb_mgr_aes_cbcs_1_9_submit_avx512.d -MT obj/mb_mgr_aes_cbcs_1_9_submit_avx512.o -o obj/mb_mgr_aes_cbcs_1_9_submit_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_aes_cbcs_1_9_submit_avx512.asm
00:02:25.174  mv obj/aes_cbcs_enc_vaes_avx512.o.tmp obj/aes_cbcs_enc_vaes_avx512.o
00:02:25.174  nasm -MD obj/mb_mgr_aes_cbcs_1_9_flush_avx512.d -MT obj/mb_mgr_aes_cbcs_1_9_flush_avx512.o -o obj/mb_mgr_aes_cbcs_1_9_flush_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_aes_cbcs_1_9_flush_avx512.asm
00:02:25.174  ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_submit_avx512.o.tmp obj/mb_mgr_aes256_submit_avx512.o
00:02:25.174  mv obj/mb_mgr_aes192_submit_avx512.o.tmp obj/mb_mgr_aes192_submit_avx512.o
00:02:25.174  nasm -MD obj/crc32_sctp_avx512.d -MT obj/crc32_sctp_avx512.o -o obj/crc32_sctp_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/crc32_sctp_avx512.asm
00:02:25.174  nasm -MD obj/crc32_lte_avx512.d -MT obj/crc32_lte_avx512.o -o obj/crc32_lte_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/crc32_lte_avx512.asm
00:02:25.174  ld -r -z ibt -z shstk -o obj/ethernet_fcs_avx512.o.tmp obj/ethernet_fcs_avx512.o
00:02:25.174  ld -r -z ibt -z shstk -o obj/crc16_x25_avx512.o.tmp obj/crc16_x25_avx512.o
00:02:25.174  mv obj/mb_mgr_aes256_submit_avx512.o.tmp obj/mb_mgr_aes256_submit_avx512.o
00:02:25.174  nasm -MD obj/crc32_fp_avx512.d -MT obj/crc32_fp_avx512.o -o obj/crc32_fp_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/crc32_fp_avx512.asm
00:02:25.174  nasm -MD obj/crc32_iuup_avx512.d -MT obj/crc32_iuup_avx512.o -o obj/crc32_iuup_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/crc32_iuup_avx512.asm
00:02:25.174  mv obj/ethernet_fcs_avx512.o.tmp obj/ethernet_fcs_avx512.o
00:02:25.174  mv obj/crc16_x25_avx512.o.tmp obj/crc16_x25_avx512.o
00:02:25.174  nasm -MD obj/crc32_wimax_avx512.d -MT obj/crc32_wimax_avx512.o -o obj/crc32_wimax_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/crc32_wimax_avx512.asm
00:02:25.174  nasm -MD obj/gcm128_vaes_avx512.d -MT obj/gcm128_vaes_avx512.o -o obj/gcm128_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/gcm128_vaes_avx512.asm
00:02:25.174  ld -r -z ibt -z shstk -o obj/crc32_refl_by16_vclmul_avx512.o.tmp obj/crc32_refl_by16_vclmul_avx512.o
00:02:25.174  ld -r -z ibt -z shstk -o obj/crc32_by16_vclmul_avx512.o.tmp obj/crc32_by16_vclmul_avx512.o
00:02:25.174  ld -r -z ibt -z shstk -o obj/crc32_sctp_avx512.o.tmp obj/crc32_sctp_avx512.o
00:02:25.174  nasm -MD obj/gcm192_vaes_avx512.d -MT obj/gcm192_vaes_avx512.o -o obj/gcm192_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/gcm192_vaes_avx512.asm
00:02:25.174  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_flush_avx512.o.tmp obj/mb_mgr_hmac_flush_avx512.o
00:02:25.174  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_384_submit_avx512.o.tmp obj/mb_mgr_hmac_sha_384_submit_avx512.o
00:02:25.174  ld -r -z ibt -z shstk -o obj/crc32_lte_avx512.o.tmp obj/crc32_lte_avx512.o
00:02:25.174  nasm -MD obj/gcm256_vaes_avx512.d -MT obj/gcm256_vaes_avx512.o -o obj/gcm256_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/gcm256_vaes_avx512.asm
00:02:25.174  ld -r -z ibt -z shstk -o obj/crc32_fp_avx512.o.tmp obj/crc32_fp_avx512.o
00:02:25.174  mv obj/crc32_refl_by16_vclmul_avx512.o.tmp obj/crc32_refl_by16_vclmul_avx512.o
00:02:25.174  ld -r -z ibt -z shstk -o obj/crc32_iuup_avx512.o.tmp obj/crc32_iuup_avx512.o
00:02:25.174  mv obj/crc32_by16_vclmul_avx512.o.tmp obj/crc32_by16_vclmul_avx512.o
00:02:25.174  mv obj/crc32_sctp_avx512.o.tmp obj/crc32_sctp_avx512.o
00:02:25.174  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_384_flush_avx512.o.tmp obj/mb_mgr_hmac_sha_384_flush_avx512.o
00:02:25.174  mv obj/mb_mgr_hmac_flush_avx512.o.tmp obj/mb_mgr_hmac_flush_avx512.o
00:02:25.174  mv obj/mb_mgr_hmac_sha_384_submit_avx512.o.tmp obj/mb_mgr_hmac_sha_384_submit_avx512.o
00:02:25.174  mv obj/crc32_lte_avx512.o.tmp obj/crc32_lte_avx512.o
00:02:25.174  ld -r -z ibt -z shstk -o obj/crc32_wimax_avx512.o.tmp obj/crc32_wimax_avx512.o
00:02:25.174  mv obj/crc32_fp_avx512.o.tmp obj/crc32_fp_avx512.o
00:02:25.174  nasm -MD obj/gcm128_avx512.d -MT obj/gcm128_avx512.o -o obj/gcm128_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/gcm128_avx512.asm
00:02:25.174  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_submit_avx512.o.tmp obj/mb_mgr_hmac_submit_avx512.o
00:02:25.174  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_224_flush_avx512.o.tmp obj/mb_mgr_hmac_sha_224_flush_avx512.o
00:02:25.174  mv obj/crc32_iuup_avx512.o.tmp obj/crc32_iuup_avx512.o
00:02:25.174  mv obj/mb_mgr_hmac_sha_384_flush_avx512.o.tmp obj/mb_mgr_hmac_sha_384_flush_avx512.o
00:02:25.174  mv obj/crc32_wimax_avx512.o.tmp obj/crc32_wimax_avx512.o
00:02:25.174  nasm -MD obj/gcm192_avx512.d -MT obj/gcm192_avx512.o -o obj/gcm192_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/gcm192_avx512.asm
00:02:25.174  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_512_flush_avx512.o.tmp obj/mb_mgr_hmac_sha_512_flush_avx512.o
00:02:25.174  mv obj/mb_mgr_hmac_submit_avx512.o.tmp obj/mb_mgr_hmac_submit_avx512.o
00:02:25.174  mv obj/mb_mgr_hmac_sha_224_flush_avx512.o.tmp obj/mb_mgr_hmac_sha_224_flush_avx512.o
00:02:25.174  nasm -MD obj/gcm256_avx512.d -MT obj/gcm256_avx512.o -o obj/gcm256_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/gcm256_avx512.asm
00:02:25.174  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_256_flush_avx512.o.tmp obj/mb_mgr_hmac_sha_256_flush_avx512.o
00:02:25.174  gcc -MMD -march=sandybridge -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC avx/mb_mgr_avx.c -o obj/mb_mgr_avx.o
00:02:25.174  mv obj/mb_mgr_hmac_sha_512_flush_avx512.o.tmp obj/mb_mgr_hmac_sha_512_flush_avx512.o
00:02:25.174  gcc -MMD -march=haswell -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC avx2/mb_mgr_avx2.c -o obj/mb_mgr_avx2.o
00:02:25.174  ld -r -z ibt -z shstk -o obj/md5_x4x2_avx.o.tmp obj/md5_x4x2_avx.o
00:02:25.174  gcc -MMD -march=broadwell -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC avx512/mb_mgr_avx512.c -o obj/mb_mgr_avx512.o
00:02:25.174  mv obj/mb_mgr_hmac_sha_256_flush_avx512.o.tmp obj/mb_mgr_hmac_sha_256_flush_avx512.o
00:02:25.174  gcc -MMD -march=nehalem -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC sse/mb_mgr_sse.c -o obj/mb_mgr_sse.o
00:02:25.174  gcc -MMD -march=nehalem -mno-pclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC -O1 no-aesni/mb_mgr_sse_no_aesni.c -o obj/mb_mgr_sse_no_aesni.o
00:02:25.174  gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/alloc.c -o obj/alloc.o
00:02:25.174  ld -r -z ibt -z shstk -o obj/sha1_x16_avx512.o.tmp obj/sha1_x16_avx512.o
00:02:25.174  mv obj/md5_x4x2_avx.o.tmp obj/md5_x4x2_avx.o
00:02:25.174  ld -r -z ibt -z shstk -o obj/aes_xcbc_mac_128_x4_no_aesni.o.tmp obj/aes_xcbc_mac_128_x4_no_aesni.o
00:02:25.174  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_512_submit_avx512.o.tmp obj/mb_mgr_hmac_sha_512_submit_avx512.o
00:02:25.174  gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/aes_xcbc_expand_key.c -o obj/aes_xcbc_expand_key.o
00:02:25.174  gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/md5_one_block.c -o obj/md5_one_block.o
00:02:25.174  mv obj/sha1_x16_avx512.o.tmp obj/sha1_x16_avx512.o
00:02:25.174  gcc -MMD -march=nehalem -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC sse/sha_sse.c -o obj/sha_sse.o
00:02:25.174  mv obj/aes_xcbc_mac_128_x4_no_aesni.o.tmp obj/aes_xcbc_mac_128_x4_no_aesni.o
00:02:25.174  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_224_submit_avx512.o.tmp obj/mb_mgr_hmac_sha_224_submit_avx512.o
00:02:25.174  mv obj/mb_mgr_hmac_sha_512_submit_avx512.o.tmp obj/mb_mgr_hmac_sha_512_submit_avx512.o
00:02:25.175  gcc -MMD -march=sandybridge -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC avx/sha_avx.c -o obj/sha_avx.o
00:02:25.175  gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/des_key.c -o obj/des_key.o
00:02:25.175  gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/des_basic.c -o obj/des_basic.o
00:02:25.175  gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/version.c -o obj/version.o
00:02:25.175  mv obj/mb_mgr_hmac_sha_224_submit_avx512.o.tmp obj/mb_mgr_hmac_sha_224_submit_avx512.o
00:02:25.175  gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/cpu_feature.c -o obj/cpu_feature.o
00:02:25.175  ld -r -z ibt -z shstk -o obj/mb_mgr_aes_flush_avx512.o.tmp obj/mb_mgr_aes_flush_avx512.o
00:02:25.175  gcc -MMD -march=nehalem -mno-pclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC -O1 no-aesni/aesni_emu.c -o obj/aesni_emu.o
00:02:25.175  ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_256_submit_avx512.o.tmp obj/mb_mgr_hmac_sha_256_submit_avx512.o
00:02:25.175  gcc -MMD -march=sandybridge -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC avx/kasumi_avx.c -o obj/kasumi_avx.o
00:02:25.175  ld -r -z ibt -z shstk -o obj/sha512_x8_avx512.o.tmp obj/sha512_x8_avx512.o
00:02:25.175  ld -r -z ibt -z shstk -o obj/mb_mgr_aes192_flush_avx512.o.tmp obj/mb_mgr_aes192_flush_avx512.o
00:02:25.175  mv obj/mb_mgr_aes_flush_avx512.o.tmp obj/mb_mgr_aes_flush_avx512.o
00:02:25.175  ld -r -z ibt -z shstk -o obj/mb_mgr_aes_cbcs_1_9_submit_avx512.o.tmp obj/mb_mgr_aes_cbcs_1_9_submit_avx512.o
00:02:25.175  mv obj/mb_mgr_hmac_sha_256_submit_avx512.o.tmp obj/mb_mgr_hmac_sha_256_submit_avx512.o
00:02:25.175  mv obj/sha512_x8_avx512.o.tmp obj/sha512_x8_avx512.o
00:02:25.175  mv obj/mb_mgr_aes192_flush_avx512.o.tmp obj/mb_mgr_aes192_flush_avx512.o
00:02:25.175  gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/kasumi_iv.c -o obj/kasumi_iv.o
00:02:25.175  mv obj/mb_mgr_aes_cbcs_1_9_submit_avx512.o.tmp obj/mb_mgr_aes_cbcs_1_9_submit_avx512.o
00:02:25.175  gcc -MMD -march=nehalem -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC sse/kasumi_sse.c -o obj/kasumi_sse.o
00:02:25.175  gcc -MMD -march=nehalem -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC sse/zuc_sse_top.c -o obj/zuc_sse_top.o
00:02:25.175  gcc -MMD -march=nehalem -mno-pclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC -O1 no-aesni/zuc_sse_no_aesni_top.c -o obj/zuc_sse_no_aesni_top.o
00:02:25.175  gcc -MMD -march=sandybridge -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC avx/zuc_avx_top.c -o obj/zuc_avx_top.o
00:02:25.175  ld -r -z ibt -z shstk -o obj/mb_mgr_des_avx512.o.tmp obj/mb_mgr_des_avx512.o
00:02:25.175  ld -r -z ibt -z shstk -o obj/sha256_x16_avx512.o.tmp obj/sha256_x16_avx512.o
00:02:25.175  mv obj/mb_mgr_des_avx512.o.tmp obj/mb_mgr_des_avx512.o
00:02:25.175  mv obj/sha256_x16_avx512.o.tmp obj/sha256_x16_avx512.o
00:02:25.175  gcc -MMD -march=haswell -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC avx2/zuc_avx2_top.c -o obj/zuc_avx2_top.o
00:02:25.175  ld -r -z ibt -z shstk -o obj/mb_mgr_zuc_submit_flush_avx512.o.tmp obj/mb_mgr_zuc_submit_flush_avx512.o
00:02:25.175  ld -r -z ibt -z shstk -o obj/mb_mgr_zuc_submit_flush_gfni_avx512.o.tmp obj/mb_mgr_zuc_submit_flush_gfni_avx512.o
00:02:25.175  gcc -MMD -march=broadwell -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC avx512/zuc_avx512_top.c -o obj/zuc_avx512_top.o
00:02:25.175  mv obj/mb_mgr_zuc_submit_flush_avx512.o.tmp obj/mb_mgr_zuc_submit_flush_avx512.o
00:02:25.175  mv obj/mb_mgr_zuc_submit_flush_gfni_avx512.o.tmp obj/mb_mgr_zuc_submit_flush_gfni_avx512.o
00:02:25.175  gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/zuc_iv.c -o obj/zuc_iv.o
00:02:25.175  gcc -MMD -march=nehalem -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC sse/snow3g_sse.c -o obj/snow3g_sse.o
00:02:25.175  ld -r -z ibt -z shstk -o obj/mb_mgr_aes_cmac_submit_flush_vaes_avx512.o.tmp obj/mb_mgr_aes_cmac_submit_flush_vaes_avx512.o
00:02:25.175  mv obj/mb_mgr_aes_cmac_submit_flush_vaes_avx512.o.tmp obj/mb_mgr_aes_cmac_submit_flush_vaes_avx512.o
00:02:25.175  ld -r -z ibt -z shstk -o obj/mb_mgr_aes_xcbc_submit_flush_vaes_avx512.o.tmp obj/mb_mgr_aes_xcbc_submit_flush_vaes_avx512.o
00:02:25.175  gcc -MMD -march=nehalem -mno-pclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC -O1 no-aesni/snow3g_sse_no_aesni.c -o obj/snow3g_sse_no_aesni.o
00:02:25.175  mv obj/mb_mgr_aes_xcbc_submit_flush_vaes_avx512.o.tmp obj/mb_mgr_aes_xcbc_submit_flush_vaes_avx512.o
00:02:25.434  gcc -MMD -march=sandybridge -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC avx/snow3g_avx.c -o obj/snow3g_avx.o
00:02:25.434  ld -r -z ibt -z shstk -o obj/mb_mgr_aes_cbcs_1_9_flush_avx512.o.tmp obj/mb_mgr_aes_cbcs_1_9_flush_avx512.o
00:02:25.434  gcc -MMD -march=haswell -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC avx2/snow3g_avx2.c -o obj/snow3g_avx2.o
00:02:25.434  mv obj/mb_mgr_aes_cbcs_1_9_flush_avx512.o.tmp obj/mb_mgr_aes_cbcs_1_9_flush_avx512.o
00:02:25.434  gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/snow3g_tables.c -o obj/snow3g_tables.o
00:02:25.434  ld -r -z ibt -z shstk -o obj/aes_cbc_enc_256_x4_no_aesni.o.tmp obj/aes_cbc_enc_256_x4_no_aesni.o
00:02:25.434  ld -r -z ibt -z shstk -o obj/md5_x8x2_avx2.o.tmp obj/md5_x8x2_avx2.o
00:02:25.434  mv obj/md5_x8x2_avx2.o.tmp obj/md5_x8x2_avx2.o
00:02:25.434  mv obj/aes_cbc_enc_256_x4_no_aesni.o.tmp obj/aes_cbc_enc_256_x4_no_aesni.o
00:02:25.434  ld -r -z ibt -z shstk -o obj/poly_avx512.o.tmp obj/poly_avx512.o
00:02:25.434  gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/snow3g_iv.c -o obj/snow3g_iv.o
00:02:25.434  nasm -MD obj/snow_v_sse.d -MT obj/snow_v_sse.o -o obj/snow_v_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/snow_v_sse.asm
00:02:25.434  ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_flush_avx512.o.tmp obj/mb_mgr_aes256_flush_avx512.o
00:02:25.434  ld -r -z ibt -z shstk -o obj/poly_fma_avx512.o.tmp obj/poly_fma_avx512.o
00:02:25.434  mv obj/poly_avx512.o.tmp obj/poly_avx512.o
00:02:25.434  nasm -MD obj/snow_v_sse_noaesni.d -MT obj/snow_v_sse_noaesni.o -o obj/snow_v_sse_noaesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/snow_v_sse_noaesni.asm
00:02:25.434  mv obj/mb_mgr_aes256_flush_avx512.o.tmp obj/mb_mgr_aes256_flush_avx512.o
00:02:25.434  mv obj/poly_fma_avx512.o.tmp obj/poly_fma_avx512.o
00:02:25.434  ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_cmac_submit_flush_vaes_avx512.o.tmp obj/mb_mgr_aes256_cmac_submit_flush_vaes_avx512.o
00:02:25.434  gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/mb_mgr_auto.c -o obj/mb_mgr_auto.o
00:02:25.434  gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/error.c -o obj/error.o
00:02:25.434  mv obj/mb_mgr_aes256_cmac_submit_flush_vaes_avx512.o.tmp obj/mb_mgr_aes256_cmac_submit_flush_vaes_avx512.o
00:02:25.435  gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/gcm.c -o obj/gcm.o
00:02:25.435  ld -r -z ibt -z shstk -o obj/mb_mgr_aes_ccm_auth_submit_flush_vaes_avx512.o.tmp obj/mb_mgr_aes_ccm_auth_submit_flush_vaes_avx512.o
00:02:25.435  mv obj/mb_mgr_aes_ccm_auth_submit_flush_vaes_avx512.o.tmp obj/mb_mgr_aes_ccm_auth_submit_flush_vaes_avx512.o
00:02:25.435  ld -r -z ibt -z shstk -o obj/snow_v_sse.o.tmp obj/snow_v_sse.o
00:02:25.435  mv obj/snow_v_sse.o.tmp obj/snow_v_sse.o
00:02:25.435  ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_ccm_auth_submit_flush_vaes_avx512.o.tmp obj/mb_mgr_aes256_ccm_auth_submit_flush_vaes_avx512.o
00:02:25.435  mv obj/mb_mgr_aes256_ccm_auth_submit_flush_vaes_avx512.o.tmp obj/mb_mgr_aes256_ccm_auth_submit_flush_vaes_avx512.o
00:02:25.435  ld -r -z ibt -z shstk -o obj/aes256_cbc_mac_x4_no_aesni.o.tmp obj/aes256_cbc_mac_x4_no_aesni.o
00:02:25.435  mv obj/aes256_cbc_mac_x4_no_aesni.o.tmp obj/aes256_cbc_mac_x4_no_aesni.o
00:02:25.435  ld -r -z ibt -z shstk -o obj/aes192_cbc_dec_by4_sse_no_aesni.o.tmp obj/aes192_cbc_dec_by4_sse_no_aesni.o
00:02:25.435  mv obj/aes192_cbc_dec_by4_sse_no_aesni.o.tmp obj/aes192_cbc_dec_by4_sse_no_aesni.o
00:02:25.435  ld -r -z ibt -z shstk -o obj/aes128_cbc_dec_by4_sse_no_aesni.o.tmp obj/aes128_cbc_dec_by4_sse_no_aesni.o
00:02:25.435  ld -r -z ibt -z shstk -o obj/snow_v_sse_noaesni.o.tmp obj/snow_v_sse_noaesni.o
00:02:25.435  mv obj/aes128_cbc_dec_by4_sse_no_aesni.o.tmp obj/aes128_cbc_dec_by4_sse_no_aesni.o
00:02:25.435  mv obj/snow_v_sse_noaesni.o.tmp obj/snow_v_sse_noaesni.o
00:02:25.435  ld -r -z ibt -z shstk -o obj/aes_cbcs_dec_vaes_avx512.o.tmp obj/aes_cbcs_dec_vaes_avx512.o
00:02:25.435  mv obj/aes_cbcs_dec_vaes_avx512.o.tmp obj/aes_cbcs_dec_vaes_avx512.o
00:02:25.435  ld -r -z ibt -z shstk -o obj/mb_mgr_zuc_submit_flush_avx2.o.tmp obj/mb_mgr_zuc_submit_flush_avx2.o
00:02:25.435  mv obj/mb_mgr_zuc_submit_flush_avx2.o.tmp obj/mb_mgr_zuc_submit_flush_avx2.o
00:02:25.693  ld -r -z ibt -z shstk -o obj/aes128_cbcs_1_9_dec_by4_sse_no_aesni.o.tmp obj/aes128_cbcs_1_9_dec_by4_sse_no_aesni.o
00:02:25.693  mv obj/aes128_cbcs_1_9_dec_by4_sse_no_aesni.o.tmp obj/aes128_cbcs_1_9_dec_by4_sse_no_aesni.o
00:02:25.693  ld -r -z ibt -z shstk -o obj/aes_cbc_enc_vaes_avx512.o.tmp obj/aes_cbc_enc_vaes_avx512.o
00:02:25.693  mv obj/aes_cbc_enc_vaes_avx512.o.tmp obj/aes_cbc_enc_vaes_avx512.o
00:02:25.693  ld -r -z ibt -z shstk -o obj/aes256_cbc_dec_by4_sse_no_aesni.o.tmp obj/aes256_cbc_dec_by4_sse_no_aesni.o
00:02:25.693  ld -r -z ibt -z shstk -o obj/aes_docsis_dec_avx512.o.tmp obj/aes_docsis_dec_avx512.o
00:02:25.693  mv obj/aes256_cbc_dec_by4_sse_no_aesni.o.tmp obj/aes256_cbc_dec_by4_sse_no_aesni.o
00:02:25.693  mv obj/aes_docsis_dec_avx512.o.tmp obj/aes_docsis_dec_avx512.o
00:02:25.952  ld -r -z ibt -z shstk -o obj/zuc_common.o.tmp obj/zuc_common.o
00:02:25.952  mv obj/zuc_common.o.tmp obj/zuc_common.o
00:02:25.952  ld -r -z ibt -z shstk -o obj/aes_docsis_enc_avx512.o.tmp obj/aes_docsis_enc_avx512.o
00:02:25.952  mv obj/aes_docsis_enc_avx512.o.tmp obj/aes_docsis_enc_avx512.o
00:02:26.210  ld -r -z ibt -z shstk -o obj/chacha20_avx2.o.tmp obj/chacha20_avx2.o
00:02:26.210  mv obj/chacha20_avx2.o.tmp obj/chacha20_avx2.o
00:02:26.210  ld -r -z ibt -z shstk -o obj/zuc_sse_gfni.o.tmp obj/zuc_sse_gfni.o
00:02:26.210  mv obj/zuc_sse_gfni.o.tmp obj/zuc_sse_gfni.o
00:02:26.210  ld -r -z ibt -z shstk -o obj/chacha20_avx.o.tmp obj/chacha20_avx.o
00:02:26.210  mv obj/chacha20_avx.o.tmp obj/chacha20_avx.o
00:02:26.468  ld -r -z ibt -z shstk -o obj/pon_sse_no_aesni.o.tmp obj/pon_sse_no_aesni.o
00:02:26.468  mv obj/pon_sse_no_aesni.o.tmp obj/pon_sse_no_aesni.o
00:02:26.468  ld -r -z ibt -z shstk -o obj/aes_docsis_enc_vaes_avx512.o.tmp obj/aes_docsis_enc_vaes_avx512.o
00:02:26.468  mv obj/aes_docsis_enc_vaes_avx512.o.tmp obj/aes_docsis_enc_vaes_avx512.o
00:02:26.468  ld -r -z ibt -z shstk -o obj/zuc_sse.o.tmp obj/zuc_sse.o
00:02:26.468  mv obj/zuc_sse.o.tmp obj/zuc_sse.o
00:02:26.469  ld -r -z ibt -z shstk -o obj/aes_cbc_dec_vaes_avx512.o.tmp obj/aes_cbc_dec_vaes_avx512.o
00:02:26.469  mv obj/aes_cbc_dec_vaes_avx512.o.tmp obj/aes_cbc_dec_vaes_avx512.o
00:02:26.727  ld -r -z ibt -z shstk -o obj/aes128_cntr_ccm_by8_sse_no_aesni.o.tmp obj/aes128_cntr_ccm_by8_sse_no_aesni.o
00:02:26.727  mv obj/aes128_cntr_ccm_by8_sse_no_aesni.o.tmp obj/aes128_cntr_ccm_by8_sse_no_aesni.o
00:02:26.727  ld -r -z ibt -z shstk -o obj/gcm128_sse.o.tmp obj/gcm128_sse.o
00:02:26.727  mv obj/gcm128_sse.o.tmp obj/gcm128_sse.o
00:02:26.985  ld -r -z ibt -z shstk -o obj/gcm192_sse.o.tmp obj/gcm192_sse.o
00:02:26.985  mv obj/gcm192_sse.o.tmp obj/gcm192_sse.o
00:02:26.985  ld -r -z ibt -z shstk -o obj/gcm128_avx_gen2.o.tmp obj/gcm128_avx_gen2.o
00:02:26.985  mv obj/gcm128_avx_gen2.o.tmp obj/gcm128_avx_gen2.o
00:02:26.985  ld -r -z ibt -z shstk -o obj/gcm192_avx_gen2.o.tmp obj/gcm192_avx_gen2.o
00:02:26.985  mv obj/gcm192_avx_gen2.o.tmp obj/gcm192_avx_gen2.o
00:02:27.244  ld -r -z ibt -z shstk -o obj/zuc_sse_no_aesni.o.tmp obj/zuc_sse_no_aesni.o
00:02:27.244  mv obj/zuc_sse_no_aesni.o.tmp obj/zuc_sse_no_aesni.o
00:02:27.244  ld -r -z ibt -z shstk -o obj/zuc_avx.o.tmp obj/zuc_avx.o
00:02:27.244  mv obj/zuc_avx.o.tmp obj/zuc_avx.o
00:02:27.503  ld -r -z ibt -z shstk -o obj/gcm256_sse.o.tmp obj/gcm256_sse.o
00:02:27.503  mv obj/gcm256_sse.o.tmp obj/gcm256_sse.o
00:02:27.761  ld -r -z ibt -z shstk -o obj/chacha20_avx512.o.tmp obj/chacha20_avx512.o
00:02:27.761  mv obj/chacha20_avx512.o.tmp obj/chacha20_avx512.o
00:02:27.761  ld -r -z ibt -z shstk -o obj/gcm192_avx512.o.tmp obj/gcm192_avx512.o
00:02:27.761  mv obj/gcm192_avx512.o.tmp obj/gcm192_avx512.o
00:02:28.020  ld -r -z ibt -z shstk -o obj/gcm256_avx_gen2.o.tmp obj/gcm256_avx_gen2.o
00:02:28.020  mv obj/gcm256_avx_gen2.o.tmp obj/gcm256_avx_gen2.o
00:02:28.020  ld -r -z ibt -z shstk -o obj/aes256_cntr_ccm_by8_sse_no_aesni.o.tmp obj/aes256_cntr_ccm_by8_sse_no_aesni.o
00:02:28.020  mv obj/aes256_cntr_ccm_by8_sse_no_aesni.o.tmp obj/aes256_cntr_ccm_by8_sse_no_aesni.o
00:02:28.020  ld -r -z ibt -z shstk -o obj/gcm256_avx512.o.tmp obj/gcm256_avx512.o
00:02:28.020  mv obj/gcm256_avx512.o.tmp obj/gcm256_avx512.o
00:02:28.020  ld -r -z ibt -z shstk -o obj/cntr_ccm_vaes_avx512.o.tmp obj/cntr_ccm_vaes_avx512.o
00:02:28.020  mv obj/cntr_ccm_vaes_avx512.o.tmp obj/cntr_ccm_vaes_avx512.o
00:02:28.020  ld -r -z ibt -z shstk -o obj/aes_docsis_dec_vaes_avx512.o.tmp obj/aes_docsis_dec_vaes_avx512.o
00:02:28.020  mv obj/aes_docsis_dec_vaes_avx512.o.tmp obj/aes_docsis_dec_vaes_avx512.o
00:02:28.279  ld -r -z ibt -z shstk -o obj/gcm128_avx512.o.tmp obj/gcm128_avx512.o
00:02:28.279  mv obj/gcm128_avx512.o.tmp obj/gcm128_avx512.o
00:02:28.279  ld -r -z ibt -z shstk -o obj/zuc_avx512.o.tmp obj/zuc_avx512.o
00:02:28.279  mv obj/zuc_avx512.o.tmp obj/zuc_avx512.o
00:02:28.847  ld -r -z ibt -z shstk -o obj/gcm192_avx_gen4.o.tmp obj/gcm192_avx_gen4.o
00:02:28.847  mv obj/gcm192_avx_gen4.o.tmp obj/gcm192_avx_gen4.o
00:02:29.106  ld -r -z ibt -z shstk -o obj/aes_ecb_by4_sse_no_aesni.o.tmp obj/aes_ecb_by4_sse_no_aesni.o
00:02:29.106  mv obj/aes_ecb_by4_sse_no_aesni.o.tmp obj/aes_ecb_by4_sse_no_aesni.o
00:02:29.106  ld -r -z ibt -z shstk -o obj/gcm256_avx_gen4.o.tmp obj/gcm256_avx_gen4.o
00:02:29.106  mv obj/gcm256_avx_gen4.o.tmp obj/gcm256_avx_gen4.o
00:02:29.106  ld -r -z ibt -z shstk -o obj/gcm128_avx_gen4.o.tmp obj/gcm128_avx_gen4.o
00:02:29.106  mv obj/gcm128_avx_gen4.o.tmp obj/gcm128_avx_gen4.o
00:02:29.364  ld -r -z ibt -z shstk -o obj/aes128_cntr_by8_sse_no_aesni.o.tmp obj/aes128_cntr_by8_sse_no_aesni.o
00:02:29.364  mv obj/aes128_cntr_by8_sse_no_aesni.o.tmp obj/aes128_cntr_by8_sse_no_aesni.o
00:02:29.931  ld -r -z ibt -z shstk -o obj/aes192_cntr_by8_sse_no_aesni.o.tmp obj/aes192_cntr_by8_sse_no_aesni.o
00:02:29.931  mv obj/aes192_cntr_by8_sse_no_aesni.o.tmp obj/aes192_cntr_by8_sse_no_aesni.o
00:02:30.498  ld -r -z ibt -z shstk -o obj/des_x16_avx512.o.tmp obj/des_x16_avx512.o
00:02:30.498  mv obj/des_x16_avx512.o.tmp obj/des_x16_avx512.o
00:02:31.064  ld -r -z ibt -z shstk -o obj/aes256_cntr_by8_sse_no_aesni.o.tmp obj/aes256_cntr_by8_sse_no_aesni.o
00:02:31.064  mv obj/aes256_cntr_by8_sse_no_aesni.o.tmp obj/aes256_cntr_by8_sse_no_aesni.o
00:02:31.323  ld -r -z ibt -z shstk -o obj/chacha20_sse.o.tmp obj/chacha20_sse.o
00:02:31.323  mv obj/chacha20_sse.o.tmp obj/chacha20_sse.o
00:02:32.700  ld -r -z ibt -z shstk -o obj/zuc_avx2.o.tmp obj/zuc_avx2.o
00:02:32.700  mv obj/zuc_avx2.o.tmp obj/zuc_avx2.o
00:02:35.236  ld -r -z ibt -z shstk -o obj/gcm128_vaes_avx512.o.tmp obj/gcm128_vaes_avx512.o
00:02:35.236  mv obj/gcm128_vaes_avx512.o.tmp obj/gcm128_vaes_avx512.o
00:02:35.494  ld -r -z ibt -z shstk -o obj/gcm192_vaes_avx512.o.tmp obj/gcm192_vaes_avx512.o
00:02:35.494  mv obj/gcm192_vaes_avx512.o.tmp obj/gcm192_vaes_avx512.o
00:02:37.398  ld -r -z ibt -z shstk -o obj/gcm256_vaes_avx512.o.tmp obj/gcm256_vaes_avx512.o
00:02:37.398  mv obj/gcm256_vaes_avx512.o.tmp obj/gcm256_vaes_avx512.o
00:02:43.969  ld -r -z ibt -z shstk -o obj/cntr_vaes_avx512.o.tmp obj/cntr_vaes_avx512.o
00:02:43.969  mv obj/cntr_vaes_avx512.o.tmp obj/cntr_vaes_avx512.o
00:03:40.203  ld -r -z ibt -z shstk -o obj/gcm128_sse_no_aesni.o.tmp obj/gcm128_sse_no_aesni.o
00:03:40.203  mv obj/gcm128_sse_no_aesni.o.tmp obj/gcm128_sse_no_aesni.o
00:03:40.461  ld -r -z ibt -z shstk -o obj/gcm192_sse_no_aesni.o.tmp obj/gcm192_sse_no_aesni.o
00:03:40.461  mv obj/gcm192_sse_no_aesni.o.tmp obj/gcm192_sse_no_aesni.o
00:03:50.447  ld -r -z ibt -z shstk -o obj/gcm256_sse_no_aesni.o.tmp obj/gcm256_sse_no_aesni.o
00:03:50.447  mv obj/gcm256_sse_no_aesni.o.tmp obj/gcm256_sse_no_aesni.o
00:03:50.448  gcc -shared -Wl,-z,noexecstack -Wl,-z,relro -Wl,-z,now -fcf-protection=full -Wl,-z,ibt -Wl,-z,shstk -Wl,-z,cet-report=error -Wl,-soname,libIPSec_MB.so.1 -o libIPSec_MB.so.1.0.0 obj/aes_keyexp_128.o obj/aes_keyexp_192.o obj/aes_keyexp_256.o obj/aes_cmac_subkey_gen.o obj/save_xmms.o obj/clear_regs_mem_fns.o obj/const.o obj/aes128_ecbenc_x3.o obj/zuc_common.o obj/wireless_common.o obj/constant_lookup.o obj/crc32_refl_const.o obj/crc32_const.o obj/poly1305.o obj/chacha20_poly1305.o obj/aes128_cbc_dec_by4_sse_no_aesni.o obj/aes192_cbc_dec_by4_sse_no_aesni.o obj/aes256_cbc_dec_by4_sse_no_aesni.o obj/aes_cbc_enc_128_x4_no_aesni.o obj/aes_cbc_enc_192_x4_no_aesni.o obj/aes_cbc_enc_256_x4_no_aesni.o obj/aes128_cntr_by8_sse_no_aesni.o obj/aes192_cntr_by8_sse_no_aesni.o obj/aes256_cntr_by8_sse_no_aesni.o obj/aes_ecb_by4_sse_no_aesni.o obj/aes128_cntr_ccm_by8_sse_no_aesni.o obj/aes256_cntr_ccm_by8_sse_no_aesni.o obj/pon_sse_no_aesni.o obj/zuc_sse_no_aesni.o obj/aes_cfb_sse_no_aesni.o obj/aes128_cbc_mac_x4_no_aesni.o obj/aes256_cbc_mac_x4_no_aesni.o obj/aes_xcbc_mac_128_x4_no_aesni.o obj/mb_mgr_aes_flush_sse_no_aesni.o obj/mb_mgr_aes_submit_sse_no_aesni.o obj/mb_mgr_aes192_flush_sse_no_aesni.o obj/mb_mgr_aes192_submit_sse_no_aesni.o obj/mb_mgr_aes256_flush_sse_no_aesni.o obj/mb_mgr_aes256_submit_sse_no_aesni.o obj/mb_mgr_aes_cmac_submit_flush_sse_no_aesni.o obj/mb_mgr_aes256_cmac_submit_flush_sse_no_aesni.o obj/mb_mgr_aes_ccm_auth_submit_flush_sse_no_aesni.o obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_no_aesni.o obj/mb_mgr_aes_xcbc_flush_sse_no_aesni.o obj/mb_mgr_aes_xcbc_submit_sse_no_aesni.o obj/mb_mgr_zuc_submit_flush_sse_no_aesni.o obj/ethernet_fcs_sse_no_aesni.o obj/crc16_x25_sse_no_aesni.o obj/aes_cbcs_1_9_enc_128_x4_no_aesni.o obj/aes128_cbcs_1_9_dec_by4_sse_no_aesni.o obj/mb_mgr_aes128_cbcs_1_9_submit_sse.o obj/mb_mgr_aes128_cbcs_1_9_flush_sse.o obj/mb_mgr_aes128_cbcs_1_9_submit_sse_no_aesni.o obj/mb_mgr_aes128_cbcs_1_9_flush_sse_no_aesni.o 
obj/crc32_refl_by8_sse_no_aesni.o obj/crc32_by8_sse_no_aesni.o obj/crc32_sctp_sse_no_aesni.o obj/crc32_lte_sse_no_aesni.o obj/crc32_fp_sse_no_aesni.o obj/crc32_iuup_sse_no_aesni.o obj/crc32_wimax_sse_no_aesni.o obj/gcm128_sse_no_aesni.o obj/gcm192_sse_no_aesni.o obj/gcm256_sse_no_aesni.o obj/aes128_cbc_dec_by4_sse.o obj/aes128_cbc_dec_by8_sse.o obj/aes192_cbc_dec_by4_sse.o obj/aes192_cbc_dec_by8_sse.o obj/aes256_cbc_dec_by4_sse.o obj/aes256_cbc_dec_by8_sse.o obj/aes_cbc_enc_128_x4.o obj/aes_cbc_enc_192_x4.o obj/aes_cbc_enc_256_x4.o obj/aes_cbc_enc_128_x8_sse.o obj/aes_cbc_enc_192_x8_sse.o obj/aes_cbc_enc_256_x8_sse.o obj/pon_sse.o obj/aes128_cntr_by8_sse.o obj/aes192_cntr_by8_sse.o obj/aes256_cntr_by8_sse.o obj/aes_ecb_by4_sse.o obj/aes128_cntr_ccm_by8_sse.o obj/aes256_cntr_ccm_by8_sse.o obj/aes_cfb_sse.o obj/aes128_cbc_mac_x4.o obj/aes256_cbc_mac_x4.o obj/aes128_cbc_mac_x8_sse.o obj/aes256_cbc_mac_x8_sse.o obj/aes_xcbc_mac_128_x4.o obj/md5_x4x2_sse.o obj/sha1_mult_sse.o obj/sha1_one_block_sse.o obj/sha224_one_block_sse.o obj/sha256_one_block_sse.o obj/sha384_one_block_sse.o obj/sha512_one_block_sse.o obj/sha512_x2_sse.o obj/sha_256_mult_sse.o obj/sha1_ni_x2_sse.o obj/sha256_ni_x2_sse.o obj/zuc_sse.o obj/zuc_sse_gfni.o obj/mb_mgr_aes_flush_sse.o obj/mb_mgr_aes_submit_sse.o obj/mb_mgr_aes192_flush_sse.o obj/mb_mgr_aes192_submit_sse.o obj/mb_mgr_aes256_flush_sse.o obj/mb_mgr_aes256_submit_sse.o obj/mb_mgr_aes_flush_sse_x8.o obj/mb_mgr_aes_submit_sse_x8.o obj/mb_mgr_aes192_flush_sse_x8.o obj/mb_mgr_aes192_submit_sse_x8.o obj/mb_mgr_aes256_flush_sse_x8.o obj/mb_mgr_aes256_submit_sse_x8.o obj/mb_mgr_aes_cmac_submit_flush_sse.o obj/mb_mgr_aes256_cmac_submit_flush_sse.o obj/mb_mgr_aes_cmac_submit_flush_sse_x8.o obj/mb_mgr_aes256_cmac_submit_flush_sse_x8.o obj/mb_mgr_aes_ccm_auth_submit_flush_sse.o obj/mb_mgr_aes_ccm_auth_submit_flush_sse_x8.o obj/mb_mgr_aes256_ccm_auth_submit_flush_sse.o obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_x8.o obj/mb_mgr_aes_xcbc_flush_sse.o 
obj/mb_mgr_aes_xcbc_submit_sse.o obj/mb_mgr_hmac_md5_flush_sse.o obj/mb_mgr_hmac_md5_submit_sse.o obj/mb_mgr_hmac_flush_sse.o obj/mb_mgr_hmac_submit_sse.o obj/mb_mgr_hmac_sha_224_flush_sse.o obj/mb_mgr_hmac_sha_224_submit_sse.o obj/mb_mgr_hmac_sha_256_flush_sse.o obj/mb_mgr_hmac_sha_256_submit_sse.o obj/mb_mgr_hmac_sha_384_flush_sse.o obj/mb_mgr_hmac_sha_384_submit_sse.o obj/mb_mgr_hmac_sha_512_flush_sse.o obj/mb_mgr_hmac_sha_512_submit_sse.o obj/mb_mgr_hmac_flush_ni_sse.o obj/mb_mgr_hmac_submit_ni_sse.o obj/mb_mgr_hmac_sha_224_flush_ni_sse.o obj/mb_mgr_hmac_sha_224_submit_ni_sse.o obj/mb_mgr_hmac_sha_256_flush_ni_sse.o obj/mb_mgr_hmac_sha_256_submit_ni_sse.o obj/mb_mgr_zuc_submit_flush_sse.o obj/mb_mgr_zuc_submit_flush_gfni_sse.o obj/ethernet_fcs_sse.o obj/crc16_x25_sse.o obj/crc32_sctp_sse.o obj/aes_cbcs_1_9_enc_128_x4.o obj/aes128_cbcs_1_9_dec_by4_sse.o obj/crc32_refl_by8_sse.o obj/crc32_by8_sse.o obj/crc32_lte_sse.o obj/crc32_fp_sse.o obj/crc32_iuup_sse.o obj/crc32_wimax_sse.o obj/chacha20_sse.o obj/memcpy_sse.o obj/gcm128_sse.o obj/gcm192_sse.o obj/gcm256_sse.o obj/aes_cbc_enc_128_x8.o obj/aes_cbc_enc_192_x8.o obj/aes_cbc_enc_256_x8.o obj/aes128_cbc_dec_by8_avx.o obj/aes192_cbc_dec_by8_avx.o obj/aes256_cbc_dec_by8_avx.o obj/pon_avx.o obj/aes128_cntr_by8_avx.o obj/aes192_cntr_by8_avx.o obj/aes256_cntr_by8_avx.o obj/aes128_cntr_ccm_by8_avx.o obj/aes256_cntr_ccm_by8_avx.o obj/aes_ecb_by4_avx.o obj/aes_cfb_avx.o obj/aes128_cbc_mac_x8.o obj/aes256_cbc_mac_x8.o obj/aes_xcbc_mac_128_x8.o obj/md5_x4x2_avx.o obj/sha1_mult_avx.o obj/sha1_one_block_avx.o obj/sha224_one_block_avx.o obj/sha256_one_block_avx.o obj/sha_256_mult_avx.o obj/sha384_one_block_avx.o obj/sha512_one_block_avx.o obj/sha512_x2_avx.o obj/zuc_avx.o obj/mb_mgr_aes_flush_avx.o obj/mb_mgr_aes_submit_avx.o obj/mb_mgr_aes192_flush_avx.o obj/mb_mgr_aes192_submit_avx.o obj/mb_mgr_aes256_flush_avx.o obj/mb_mgr_aes256_submit_avx.o obj/mb_mgr_aes_cmac_submit_flush_avx.o obj/mb_mgr_aes256_cmac_submit_flush_avx.o 
obj/mb_mgr_aes_ccm_auth_submit_flush_avx.o obj/mb_mgr_aes256_ccm_auth_submit_flush_avx.o obj/mb_mgr_aes_xcbc_flush_avx.o obj/mb_mgr_aes_xcbc_submit_avx.o obj/mb_mgr_hmac_md5_flush_avx.o obj/mb_mgr_hmac_md5_submit_avx.o obj/mb_mgr_hmac_flush_avx.o obj/mb_mgr_hmac_submit_avx.o obj/mb_mgr_hmac_sha_224_flush_avx.o obj/mb_mgr_hmac_sha_224_submit_avx.o obj/mb_mgr_hmac_sha_256_flush_avx.o obj/mb_mgr_hmac_sha_256_submit_avx.o obj/mb_mgr_hmac_sha_384_flush_avx.o obj/mb_mgr_hmac_sha_384_submit_avx.o obj/mb_mgr_hmac_sha_512_flush_avx.o obj/mb_mgr_hmac_sha_512_submit_avx.o obj/mb_mgr_zuc_submit_flush_avx.o obj/ethernet_fcs_avx.o obj/crc16_x25_avx.o obj/aes_cbcs_1_9_enc_128_x8.o obj/aes128_cbcs_1_9_dec_by8_avx.o obj/mb_mgr_aes128_cbcs_1_9_submit_avx.o obj/mb_mgr_aes128_cbcs_1_9_flush_avx.o obj/crc32_refl_by8_avx.o obj/crc32_by8_avx.o obj/crc32_sctp_avx.o obj/crc32_lte_avx.o obj/crc32_fp_avx.o obj/crc32_iuup_avx.o obj/crc32_wimax_avx.o obj/chacha20_avx.o obj/memcpy_avx.o obj/gcm128_avx_gen2.o obj/gcm192_avx_gen2.o obj/gcm256_avx_gen2.o obj/md5_x8x2_avx2.o obj/sha1_x8_avx2.o obj/sha256_oct_avx2.o obj/sha512_x4_avx2.o obj/zuc_avx2.o obj/mb_mgr_hmac_md5_flush_avx2.o obj/mb_mgr_hmac_md5_submit_avx2.o obj/mb_mgr_hmac_flush_avx2.o obj/mb_mgr_hmac_submit_avx2.o obj/mb_mgr_hmac_sha_224_flush_avx2.o obj/mb_mgr_hmac_sha_224_submit_avx2.o obj/mb_mgr_hmac_sha_256_flush_avx2.o obj/mb_mgr_hmac_sha_256_submit_avx2.o obj/mb_mgr_hmac_sha_384_flush_avx2.o obj/mb_mgr_hmac_sha_384_submit_avx2.o obj/mb_mgr_hmac_sha_512_flush_avx2.o obj/mb_mgr_hmac_sha_512_submit_avx2.o obj/mb_mgr_zuc_submit_flush_avx2.o obj/chacha20_avx2.o obj/gcm128_avx_gen4.o obj/gcm192_avx_gen4.o obj/gcm256_avx_gen4.o obj/sha1_x16_avx512.o obj/sha256_x16_avx512.o obj/sha512_x8_avx512.o obj/des_x16_avx512.o obj/cntr_vaes_avx512.o obj/cntr_ccm_vaes_avx512.o obj/aes_cbc_dec_vaes_avx512.o obj/aes_cbc_enc_vaes_avx512.o obj/aes_cbcs_enc_vaes_avx512.o obj/aes_cbcs_dec_vaes_avx512.o obj/aes_docsis_dec_avx512.o obj/aes_docsis_enc_avx512.o 
obj/aes_docsis_dec_vaes_avx512.o obj/aes_docsis_enc_vaes_avx512.o obj/zuc_avx512.o obj/mb_mgr_aes_submit_avx512.o obj/mb_mgr_aes_flush_avx512.o obj/mb_mgr_aes192_submit_avx512.o obj/mb_mgr_aes192_flush_avx512.o obj/mb_mgr_aes256_submit_avx512.o obj/mb_mgr_aes256_flush_avx512.o obj/mb_mgr_hmac_flush_avx512.o obj/mb_mgr_hmac_submit_avx512.o obj/mb_mgr_hmac_sha_224_flush_avx512.o obj/mb_mgr_hmac_sha_224_submit_avx512.o obj/mb_mgr_hmac_sha_256_flush_avx512.o obj/mb_mgr_hmac_sha_256_submit_avx512.o obj/mb_mgr_hmac_sha_384_flush_avx512.o obj/mb_mgr_hmac_sha_384_submit_avx512.o obj/mb_mgr_hmac_sha_512_flush_avx512.o obj/mb_mgr_hmac_sha_512_submit_avx512.o obj/mb_mgr_des_avx512.o obj/mb_mgr_aes_cmac_submit_flush_vaes_avx512.o obj/mb_mgr_aes256_cmac_submit_flush_vaes_avx512.o obj/mb_mgr_aes_ccm_auth_submit_flush_vaes_avx512.o obj/mb_mgr_aes256_ccm_auth_submit_flush_vaes_avx512.o obj/mb_mgr_aes_xcbc_submit_flush_vaes_avx512.o obj/mb_mgr_zuc_submit_flush_avx512.o obj/mb_mgr_zuc_submit_flush_gfni_avx512.o obj/chacha20_avx512.o obj/poly_avx512.o obj/poly_fma_avx512.o obj/ethernet_fcs_avx512.o obj/crc16_x25_avx512.o obj/crc32_refl_by16_vclmul_avx512.o obj/crc32_by16_vclmul_avx512.o obj/mb_mgr_aes_cbcs_1_9_submit_avx512.o obj/mb_mgr_aes_cbcs_1_9_flush_avx512.o obj/crc32_sctp_avx512.o obj/crc32_lte_avx512.o obj/crc32_fp_avx512.o obj/crc32_iuup_avx512.o obj/crc32_wimax_avx512.o obj/gcm128_vaes_avx512.o obj/gcm192_vaes_avx512.o obj/gcm256_vaes_avx512.o obj/gcm128_avx512.o obj/gcm192_avx512.o obj/gcm256_avx512.o obj/mb_mgr_avx.o obj/mb_mgr_avx2.o obj/mb_mgr_avx512.o obj/mb_mgr_sse.o obj/mb_mgr_sse_no_aesni.o obj/alloc.o obj/aes_xcbc_expand_key.o obj/md5_one_block.o obj/sha_sse.o obj/sha_avx.o obj/des_key.o obj/des_basic.o obj/version.o obj/cpu_feature.o obj/aesni_emu.o obj/kasumi_avx.o obj/kasumi_iv.o obj/kasumi_sse.o obj/zuc_sse_top.o obj/zuc_sse_no_aesni_top.o obj/zuc_avx_top.o obj/zuc_avx2_top.o obj/zuc_avx512_top.o obj/zuc_iv.o obj/snow3g_sse.o obj/snow3g_sse_no_aesni.o 
obj/snow3g_avx.o obj/snow3g_avx2.o obj/snow3g_tables.o obj/snow3g_iv.o obj/snow_v_sse.o obj/snow_v_sse_noaesni.o obj/mb_mgr_auto.o obj/error.o obj/gcm.o -lc
00:03:50.448  ln -f -s libIPSec_MB.so.1.0.0 ./libIPSec_MB.so.1
00:03:50.448  ln -f -s libIPSec_MB.so.1 ./libIPSec_MB.so
00:03:50.448  make[1]: Leaving directory '/var/jenkins/workspace/vfio-user-phy-autotest/dpdk/intel-ipsec-mb/lib'
00:03:50.448  make -C test
00:03:50.448  make[1]: Entering directory '/var/jenkins/workspace/vfio-user-phy-autotest/dpdk/intel-ipsec-mb/test'
00:03:50.448  gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3   -c -o main.o main.c
00:03:50.448  gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3   -c -o gcm_test.o gcm_test.c
00:03:50.448  gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3   -c -o ctr_test.o ctr_test.c
00:03:50.448  gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3   -c -o customop_test.o customop_test.c
00:03:50.448  gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3   -c -o des_test.o des_test.c
00:03:50.448  gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3   -c -o ccm_test.o ccm_test.c
00:03:50.448  gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3   -c -o cmac_test.o cmac_test.c
00:03:50.448  gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3   -c -o utils.o utils.c
00:03:50.449  gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3   -c -o hmac_sha1_test.o hmac_sha1_test.c
00:03:50.449  gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3   -c -o hmac_sha256_sha512_test.o hmac_sha256_sha512_test.c
00:03:50.449  gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3   -c -o hmac_md5_test.o hmac_md5_test.c
00:03:50.449  gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3   -c -o aes_test.o aes_test.c
00:03:50.449  gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3   -c -o sha_test.o sha_test.c
00:03:50.449  gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3   -c -o chained_test.o chained_test.c
00:03:50.449  gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3   -c -o api_test.o api_test.c
00:03:50.449  gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3   -c -o pon_test.o pon_test.c
00:03:50.449  gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3   -c -o ecb_test.o ecb_test.c
00:03:50.449  gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3   -c -o zuc_test.o zuc_test.c
00:03:50.449  gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3   -c -o kasumi_test.o kasumi_test.c
00:03:50.449  gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3   -c -o snow3g_test.o snow3g_test.c
00:03:50.449  gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3   -c -o direct_api_test.o direct_api_test.c
00:03:50.449  gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3   -c -o clear_mem_test.o clear_mem_test.c
00:03:50.449  gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3   -c -o hec_test.o hec_test.c
00:03:50.449  gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3   -c -o xcbc_test.o xcbc_test.c
00:03:50.449  gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3   -c -o aes_cbcs_test.o aes_cbcs_test.c
00:03:50.449  gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3   -c -o crc_test.o crc_test.c
00:03:50.449  gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3   -c -o chacha_test.o chacha_test.c
00:03:50.449  gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3   -c -o poly1305_test.o poly1305_test.c
00:03:50.449  gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3   -c -o chacha20_poly1305_test.o chacha20_poly1305_test.c
00:03:50.449  gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3   -c -o null_test.o null_test.c
00:03:50.449  gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3   -c -o snow_v_test.o snow_v_test.c
00:03:50.449  gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3   -c -o ipsec_xvalid.o ipsec_xvalid.c
00:03:50.449  nasm -MD misc.d -MT misc.o -o misc.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ misc.asm
00:03:50.449  ld -r -z ibt -z shstk -o misc.o.tmp misc.o
00:03:50.449  mv misc.o.tmp misc.o
00:03:50.449  utils.c:166:32: warning: argument 2 of type ‘uint8_t[6]’ {aka ‘unsigned char[6]’} with mismatched bound [-Warray-parameter=]
00:03:50.449    166 |                        uint8_t arch_support[IMB_ARCH_NUM],
00:03:50.449        |                        ~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~
00:03:50.449  In file included from utils.c:35:
00:03:50.449  utils.h:39:54: note: previously declared as ‘uint8_t *’ {aka ‘unsigned char *’}
00:03:50.449     39 | int update_flags_and_archs(const char *arg, uint8_t *arch_support,
00:03:50.449        |                                             ~~~~~~~~~^~~~~~~~~~~~
00:03:50.449  utils.c:207:21: warning: argument 1 of type ‘uint8_t[6]’ {aka ‘unsigned char[6]’} with mismatched bound [-Warray-parameter=]
00:03:50.449    207 | detect_arch(uint8_t arch_support[IMB_ARCH_NUM])
00:03:50.449        |             ~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~
00:03:50.449  utils.h:41:26: note: previously declared as ‘uint8_t *’ {aka ‘unsigned char *’}
00:03:50.449     41 | int detect_arch(uint8_t *arch_support);
00:03:50.449        |                 ~~~~~~~~~^~~~~~~~~~~~
00:03:50.449  In file included from null_test.c:33:
00:03:50.449  null_test.c: In function ‘test_null_hash’:
00:03:50.449  ../lib/intel-ipsec-mb.h:1235:10: warning: ‘cipher_key’ may be used uninitialized [-Wmaybe-uninitialized]
00:03:50.449   1235 |         ((_mgr)->keyexp_128((_raw), (_enc), (_dec)))
00:03:50.449        |         ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
00:03:50.449  null_test.c:62:9: note: in expansion of macro ‘IMB_AES_KEYEXP_128’
00:03:50.449     62 |         IMB_AES_KEYEXP_128(mb_mgr, cipher_key, expkey, dust);
00:03:50.449        |         ^~~~~~~~~~~~~~~~~~
00:03:50.449  ../lib/intel-ipsec-mb.h:1235:10: note: by argument 1 of type ‘const void *’ to ‘void(const void *, void *, void *)’
00:03:50.449   1235 |         ((_mgr)->keyexp_128((_raw), (_enc), (_dec)))
00:03:50.449        |         ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
00:03:50.449  null_test.c:62:9: note: in expansion of macro ‘IMB_AES_KEYEXP_128’
00:03:50.449     62 |         IMB_AES_KEYEXP_128(mb_mgr, cipher_key, expkey, dust);
00:03:50.449        |         ^~~~~~~~~~~~~~~~~~
00:03:50.449  null_test.c:47:33: note: ‘cipher_key’ declared here
00:03:50.449     47 |         DECLARE_ALIGNED(uint8_t cipher_key[16], 16);
00:03:50.449        |                                 ^~~~~~~~~~
00:03:50.450  ../lib/intel-ipsec-mb.h:51:9: note: in definition of macro ‘DECLARE_ALIGNED’
00:03:50.450     51 |         decl __attribute__((aligned(alignval)))
00:03:50.450        |         ^~~~
00:03:51.438  gcc -fPIE -z noexecstack -z relro -z now -fcf-protection=full -Wl,-z,ibt -Wl,-z,shstk -Wl,-z,cet-report=error -L../lib main.o gcm_test.o ctr_test.o customop_test.o des_test.o ccm_test.o cmac_test.o utils.o hmac_sha1_test.o hmac_sha256_sha512_test.o hmac_md5_test.o aes_test.o sha_test.o chained_test.o api_test.o pon_test.o ecb_test.o zuc_test.o kasumi_test.o snow3g_test.o direct_api_test.o clear_mem_test.o hec_test.o xcbc_test.o aes_cbcs_test.o crc_test.o chacha_test.o poly1305_test.o chacha20_poly1305_test.o null_test.o snow_v_test.o -lIPSec_MB -o ipsec_MB_testapp
00:03:51.438  gcc -fPIE -z noexecstack -z relro -z now -fcf-protection=full -Wl,-z,ibt -Wl,-z,shstk -Wl,-z,cet-report=error -L../lib ipsec_xvalid.o utils.o misc.o -lIPSec_MB -o ipsec_xvalid_test
00:03:51.438  make[1]: Leaving directory '/var/jenkins/workspace/vfio-user-phy-autotest/dpdk/intel-ipsec-mb/test'
00:03:51.438  make -C perf
00:03:51.438  make[1]: Entering directory '/var/jenkins/workspace/vfio-user-phy-autotest/dpdk/intel-ipsec-mb/perf'
00:03:51.438  gcc -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053  -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -pthread -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -fPIE -fstack-protector -D_FORTIFY_SOURCE=2   -c -o ipsec_perf.o ipsec_perf.c
00:03:51.438  gcc -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053  -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -pthread -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -fPIE -fstack-protector -D_FORTIFY_SOURCE=2   -c -o msr.o msr.c
00:03:51.438  nasm -MD misc.d -MT misc.o -o misc.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ misc.asm
00:03:51.438  ld -r -z ibt -z shstk -o misc.o.tmp misc.o
00:03:51.438  mv misc.o.tmp misc.o
00:03:52.005  In file included from ipsec_perf.c:59:
00:03:52.005  ipsec_perf.c: In function ‘do_test_gcm’:
00:03:52.005  ../lib/intel-ipsec-mb.h:1382:10: warning: ‘key’ may be used uninitialized [-Wmaybe-uninitialized]
00:03:52.005   1382 |         ((_mgr)->gcm128_pre((_key_in), (_key_exp)))
00:03:52.005        |         ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
00:03:52.005  ipsec_perf.c:1937:17: note: in expansion of macro ‘IMB_AES128_GCM_PRE’
00:03:52.005   1937 |                 IMB_AES128_GCM_PRE(mb_mgr, key, &gdata_key);
00:03:52.005        |                 ^~~~~~~~~~~~~~~~~~
00:03:52.005  ../lib/intel-ipsec-mb.h:1382:10: note: by argument 1 of type ‘const void *’ to ‘void(const void *, struct gcm_key_data *)’
00:03:52.005   1382 |         ((_mgr)->gcm128_pre((_key_in), (_key_exp)))
00:03:52.005        |         ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
00:03:52.005  ipsec_perf.c:1937:17: note: in expansion of macro ‘IMB_AES128_GCM_PRE’
00:03:52.005   1937 |                 IMB_AES128_GCM_PRE(mb_mgr, key, &gdata_key);
00:03:52.005        |                 ^~~~~~~~~~~~~~~~~~
00:03:52.005  ../lib/intel-ipsec-mb.h:1384:10: warning: ‘key’ may be used uninitialized [-Wmaybe-uninitialized]
00:03:52.005   1384 |         ((_mgr)->gcm192_pre((_key_in), (_key_exp)))
00:03:52.005        |         ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
00:03:52.005  ipsec_perf.c:1940:17: note: in expansion of macro ‘IMB_AES192_GCM_PRE’
00:03:52.005   1940 |                 IMB_AES192_GCM_PRE(mb_mgr, key, &gdata_key);
00:03:52.005        |                 ^~~~~~~~~~~~~~~~~~
00:03:52.005  ../lib/intel-ipsec-mb.h:1384:10: note: by argument 1 of type ‘const void *’ to ‘void(const void *, struct gcm_key_data *)’
00:03:52.005   1384 |         ((_mgr)->gcm192_pre((_key_in), (_key_exp)))
00:03:52.005        |         ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
00:03:52.005  ipsec_perf.c:1940:17: note: in expansion of macro ‘IMB_AES192_GCM_PRE’
00:03:52.005   1940 |                 IMB_AES192_GCM_PRE(mb_mgr, key, &gdata_key);
00:03:52.005        |                 ^~~~~~~~~~~~~~~~~~
00:03:52.005  ../lib/intel-ipsec-mb.h:1386:10: warning: ‘key’ may be used uninitialized [-Wmaybe-uninitialized]
00:03:52.005   1386 |         ((_mgr)->gcm256_pre((_key_in), (_key_exp)))
00:03:52.005        |         ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
00:03:52.005  ipsec_perf.c:1944:17: note: in expansion of macro ‘IMB_AES256_GCM_PRE’
00:03:52.005   1944 |                 IMB_AES256_GCM_PRE(mb_mgr, key, &gdata_key);
00:03:52.005        |                 ^~~~~~~~~~~~~~~~~~
00:03:52.005  ../lib/intel-ipsec-mb.h:1386:10: note: by argument 1 of type ‘const void *’ to ‘void(const void *, struct gcm_key_data *)’
00:03:52.005   1386 |         ((_mgr)->gcm256_pre((_key_in), (_key_exp)))
00:03:52.005        |         ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
00:03:52.005  ipsec_perf.c:1944:17: note: in expansion of macro ‘IMB_AES256_GCM_PRE’
00:03:52.006   1944 |                 IMB_AES256_GCM_PRE(mb_mgr, key, &gdata_key);
00:03:52.006        |                 ^~~~~~~~~~~~~~~~~~
00:03:52.574  gcc -fPIE -z noexecstack -z relro -z now -pthread -fcf-protection=full -Wl,-z,ibt -Wl,-z,shstk -Wl,-z,cet-report=error -L../lib ipsec_perf.o msr.o misc.o -lIPSec_MB -o ipsec_perf
00:03:52.574  make[1]: Leaving directory '/var/jenkins/workspace/vfio-user-phy-autotest/dpdk/intel-ipsec-mb/perf'
00:03:52.574   18:24:38 build_native_dpdk -- common/autobuild_common.sh@119 -- $ DPDK_DRIVERS+=("crypto")
00:03:52.574   18:24:38 build_native_dpdk -- common/autobuild_common.sh@120 -- $ DPDK_DRIVERS+=("$intel_ipsec_mb_drv")
00:03:52.574   18:24:38 build_native_dpdk -- common/autobuild_common.sh@121 -- $ DPDK_DRIVERS+=("crypto/qat")
00:03:52.574   18:24:38 build_native_dpdk -- common/autobuild_common.sh@122 -- $ DPDK_DRIVERS+=("compress/qat")
00:03:52.574   18:24:38 build_native_dpdk -- common/autobuild_common.sh@123 -- $ DPDK_DRIVERS+=("common/qat")
00:03:52.574   18:24:38 build_native_dpdk -- common/autobuild_common.sh@125 -- $ ge 23.11.0 21.11.0
00:03:52.574   18:24:38 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 21.11.0
00:03:52.574   18:24:38 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:03:52.574   18:24:38 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:03:52.574   18:24:38 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:03:52.574   18:24:38 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:03:52.574   18:24:38 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:03:52.574   18:24:38 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:03:52.574   18:24:38 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>='
00:03:52.574   18:24:38 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:03:52.574   18:24:38 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:03:52.574   18:24:38 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:03:52.574   18:24:38 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:03:52.574   18:24:38 build_native_dpdk -- scripts/common.sh@348 -- $ : 1
00:03:52.574   18:24:38 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:03:52.574   18:24:38 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:52.574    18:24:38 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23
00:03:52.574    18:24:38 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23
00:03:52.574    18:24:38 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:03:52.574    18:24:38 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23
00:03:52.574   18:24:38 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23
00:03:52.574    18:24:38 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21
00:03:52.574    18:24:38 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21
00:03:52.574    18:24:38 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:03:52.574    18:24:38 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21
00:03:52.574   18:24:38 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21
00:03:52.574   18:24:38 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:03:52.574   18:24:38 build_native_dpdk -- scripts/common.sh@367 -- $ return 0
00:03:52.574   18:24:38 build_native_dpdk -- common/autobuild_common.sh@128 -- $ DPDK_DRIVERS+=("bus/auxiliary")
00:03:52.574   18:24:38 build_native_dpdk -- common/autobuild_common.sh@129 -- $ DPDK_DRIVERS+=("common/mlx5")
00:03:52.574   18:24:38 build_native_dpdk -- common/autobuild_common.sh@130 -- $ DPDK_DRIVERS+=("common/mlx5/linux")
00:03:52.574   18:24:38 build_native_dpdk -- common/autobuild_common.sh@131 -- $ DPDK_DRIVERS+=("crypto/mlx5")
00:03:52.574   18:24:38 build_native_dpdk -- common/autobuild_common.sh@132 -- $ mlx5_libs_added=y
00:03:52.574   18:24:38 build_native_dpdk -- common/autobuild_common.sh@134 -- $ dpdk_cflags+=' -I/var/jenkins/workspace/vfio-user-phy-autotest/dpdk/intel-ipsec-mb/lib'
00:03:52.574   18:24:38 build_native_dpdk -- common/autobuild_common.sh@135 -- $ dpdk_ldflags+=' -L/var/jenkins/workspace/vfio-user-phy-autotest/dpdk/intel-ipsec-mb/lib'
00:03:52.574   18:24:38 build_native_dpdk -- common/autobuild_common.sh@136 -- $ export LD_LIBRARY_PATH=:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/lib:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/lib:/var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/vfio-user-phy-autotest/dpdk/intel-ipsec-mb/lib
00:03:52.574   18:24:38 build_native_dpdk -- common/autobuild_common.sh@136 -- $ LD_LIBRARY_PATH=:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/lib:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/lib:/var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/vfio-user-phy-autotest/dpdk/intel-ipsec-mb/lib
00:03:52.574   18:24:38 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
00:03:52.574   18:24:38 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/vfio-user-phy-autotest/dpdk
00:03:52.574    18:24:38 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s
00:03:52.574   18:24:38 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
00:03:52.574   18:24:38 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0
00:03:52.574   18:24:38 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0
00:03:52.574   18:24:38 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:03:52.574   18:24:38 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:03:52.574   18:24:38 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:03:52.574   18:24:38 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:03:52.574   18:24:38 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:03:52.574   18:24:38 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:03:52.574   18:24:38 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:03:52.574   18:24:38 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:03:52.574   18:24:38 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:03:52.574   18:24:38 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:03:52.574   18:24:38 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:03:52.574   18:24:38 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:03:52.574   18:24:38 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:03:52.574   18:24:38 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:52.574    18:24:38 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23
00:03:52.574    18:24:38 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23
00:03:52.574    18:24:38 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:03:52.574    18:24:38 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23
00:03:52.574   18:24:38 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23
00:03:52.574    18:24:38 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21
00:03:52.574    18:24:38 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21
00:03:52.574    18:24:38 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:03:52.574    18:24:38 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21
00:03:52.574   18:24:38 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21
00:03:52.574   18:24:38 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:03:52.574   18:24:38 build_native_dpdk -- scripts/common.sh@367 -- $ return 1
00:03:52.574   18:24:38 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1
00:03:52.574  patching file config/rte_config.h
00:03:52.574  Hunk #1 succeeded at 60 (offset 1 line).
00:03:52.574   18:24:38 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0
00:03:52.574   18:24:38 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0
00:03:52.574   18:24:38 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:03:52.574   18:24:38 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:03:52.574   18:24:38 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:03:52.575   18:24:38 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:03:52.575   18:24:38 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:03:52.575   18:24:38 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:03:52.575   18:24:38 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:03:52.575   18:24:38 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:03:52.575   18:24:38 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:03:52.575   18:24:38 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:03:52.575   18:24:38 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:03:52.575   18:24:38 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:03:52.575   18:24:38 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:03:52.575   18:24:38 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:52.575    18:24:38 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23
00:03:52.575    18:24:38 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23
00:03:52.575    18:24:38 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:03:52.575    18:24:38 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23
00:03:52.575   18:24:38 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23
00:03:52.575    18:24:38 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:03:52.575    18:24:38 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:03:52.575    18:24:38 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:03:52.575    18:24:38 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:03:52.575   18:24:38 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:03:52.575   18:24:38 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:03:52.575   18:24:38 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:03:52.575   18:24:38 build_native_dpdk -- scripts/common.sh@368 -- $ return 0
00:03:52.575   18:24:38 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1
00:03:52.575  patching file lib/pcapng/rte_pcapng.c
00:03:52.575   18:24:38 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0
00:03:52.575   18:24:38 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0
00:03:52.575   18:24:38 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:03:52.575   18:24:38 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:03:52.575   18:24:38 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:03:52.575   18:24:38 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:03:52.575   18:24:38 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:03:52.575   18:24:38 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:03:52.575   18:24:38 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>='
00:03:52.575   18:24:38 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:03:52.575   18:24:38 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:03:52.575   18:24:38 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:03:52.575   18:24:38 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:03:52.575   18:24:38 build_native_dpdk -- scripts/common.sh@348 -- $ : 1
00:03:52.575   18:24:38 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:03:52.575   18:24:38 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:52.575    18:24:38 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23
00:03:52.575    18:24:38 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23
00:03:52.575    18:24:38 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:03:52.575    18:24:38 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23
00:03:52.575   18:24:38 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23
00:03:52.575    18:24:38 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:03:52.575    18:24:38 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:03:52.575    18:24:38 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:03:52.575    18:24:38 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:03:52.575   18:24:38 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:03:52.575   18:24:38 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:03:52.575   18:24:38 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:03:52.575   18:24:38 build_native_dpdk -- scripts/common.sh@368 -- $ return 1
00:03:52.575   18:24:38 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false
00:03:52.575    18:24:38 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s
00:03:52.575   18:24:38 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']'
00:03:52.575    18:24:38 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base crypto crypto/ipsec_mb crypto/qat compress/qat common/qat bus/auxiliary common/mlx5 common/mlx5/linux crypto/mlx5
00:03:52.575   18:24:38 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false '-Dc_link_args= -L/var/jenkins/workspace/vfio-user-phy-autotest/dpdk/intel-ipsec-mb/lib' '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow -I/var/jenkins/workspace/vfio-user-phy-autotest/dpdk/intel-ipsec-mb/lib' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,crypto,crypto/ipsec_mb,crypto/qat,compress/qat,common/qat,bus/auxiliary,common/mlx5,common/mlx5/linux,crypto/mlx5,
00:03:55.870  The Meson build system
00:03:55.870  Version: 1.5.0
00:03:55.870  Source dir: /var/jenkins/workspace/vfio-user-phy-autotest/dpdk
00:03:55.870  Build dir: /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build-tmp
00:03:55.870  Build type: native build
00:03:55.870  Program cat found: YES (/usr/bin/cat)
00:03:55.870  Project name: DPDK
00:03:55.870  Project version: 23.11.0
00:03:55.870  C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:55.870  C linker for the host machine: gcc ld.bfd 2.40-14
00:03:55.870  Host machine cpu family: x86_64
00:03:55.870  Host machine cpu: x86_64
00:03:55.870  Message: ## Building in Developer Mode ##
00:03:55.870  Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:55.870  Program check-symbols.sh found: YES (/var/jenkins/workspace/vfio-user-phy-autotest/dpdk/buildtools/check-symbols.sh)
00:03:55.870  Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/vfio-user-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh)
00:03:55.870  Program python3 found: YES (/usr/bin/python3)
00:03:55.870  Program cat found: YES (/usr/bin/cat)
00:03:55.870  config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:03:55.870  Compiler for C supports arguments -march=native: YES 
00:03:55.870  Checking for size of "void *" : 8 
00:03:55.870  Checking for size of "void *" : 8 (cached)
00:03:55.870  Library m found: YES
00:03:55.870  Library numa found: YES
00:03:55.870  Has header "numaif.h" : YES 
00:03:55.870  Library fdt found: NO
00:03:55.870  Library execinfo found: NO
00:03:55.870  Has header "execinfo.h" : YES 
00:03:55.870  Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:55.870  Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:55.870  Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:55.871  Run-time dependency jansson found: NO (tried pkgconfig)
00:03:55.871  Run-time dependency openssl found: YES 3.1.1
00:03:55.871  Run-time dependency libpcap found: YES 1.10.4
00:03:55.871  Has header "pcap.h" with dependency libpcap: YES 
00:03:55.871  Compiler for C supports arguments -Wcast-qual: YES 
00:03:55.871  Compiler for C supports arguments -Wdeprecated: YES 
00:03:55.871  Compiler for C supports arguments -Wformat: YES 
00:03:55.871  Compiler for C supports arguments -Wformat-nonliteral: NO 
00:03:55.871  Compiler for C supports arguments -Wformat-security: NO 
00:03:55.871  Compiler for C supports arguments -Wmissing-declarations: YES 
00:03:55.871  Compiler for C supports arguments -Wmissing-prototypes: YES 
00:03:55.871  Compiler for C supports arguments -Wnested-externs: YES 
00:03:55.871  Compiler for C supports arguments -Wold-style-definition: YES 
00:03:55.871  Compiler for C supports arguments -Wpointer-arith: YES 
00:03:55.871  Compiler for C supports arguments -Wsign-compare: YES 
00:03:55.871  Compiler for C supports arguments -Wstrict-prototypes: YES 
00:03:55.871  Compiler for C supports arguments -Wundef: YES 
00:03:55.871  Compiler for C supports arguments -Wwrite-strings: YES 
00:03:55.871  Compiler for C supports arguments -Wno-address-of-packed-member: YES 
00:03:55.871  Compiler for C supports arguments -Wno-packed-not-aligned: YES 
00:03:55.871  Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:03:55.871  Compiler for C supports arguments -Wno-zero-length-bounds: YES 
00:03:55.871  Program objdump found: YES (/usr/bin/objdump)
00:03:55.871  Compiler for C supports arguments -mavx512f: YES 
00:03:55.871  Checking if "AVX512 checking" compiles: YES 
00:03:55.871  Fetching value of define "__SSE4_2__" : 1 
00:03:55.871  Fetching value of define "__AES__" : 1 
00:03:55.871  Fetching value of define "__AVX__" : 1 
00:03:55.871  Fetching value of define "__AVX2__" : 1 
00:03:55.871  Fetching value of define "__AVX512BW__" : (undefined) 
00:03:55.871  Fetching value of define "__AVX512CD__" : (undefined) 
00:03:55.871  Fetching value of define "__AVX512DQ__" : (undefined) 
00:03:55.871  Fetching value of define "__AVX512F__" : (undefined) 
00:03:55.871  Fetching value of define "__AVX512VL__" : (undefined) 
00:03:55.871  Fetching value of define "__PCLMUL__" : 1 
00:03:55.871  Fetching value of define "__RDRND__" : 1 
00:03:55.871  Fetching value of define "__RDSEED__" : 1 
00:03:55.871  Fetching value of define "__VPCLMULQDQ__" : (undefined) 
00:03:55.871  Fetching value of define "__znver1__" : (undefined) 
00:03:55.871  Fetching value of define "__znver2__" : (undefined) 
00:03:55.871  Fetching value of define "__znver3__" : (undefined) 
00:03:55.871  Fetching value of define "__znver4__" : (undefined) 
00:03:55.871  Compiler for C supports arguments -Wno-format-truncation: YES 
00:03:55.871  Message: lib/log: Defining dependency "log"
00:03:55.871  Message: lib/kvargs: Defining dependency "kvargs"
00:03:55.871  Message: lib/telemetry: Defining dependency "telemetry"
00:03:55.871  Checking for function "getentropy" : NO 
00:03:55.871  Message: lib/eal: Defining dependency "eal"
00:03:55.871  Message: lib/ring: Defining dependency "ring"
00:03:55.871  Message: lib/rcu: Defining dependency "rcu"
00:03:55.871  Message: lib/mempool: Defining dependency "mempool"
00:03:55.871  Message: lib/mbuf: Defining dependency "mbuf"
00:03:55.871  Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:55.871  Fetching value of define "__AVX512F__" : (undefined) (cached)
00:03:55.871  Compiler for C supports arguments -mpclmul: YES 
00:03:55.871  Compiler for C supports arguments -maes: YES 
00:03:55.871  Compiler for C supports arguments -mavx512f: YES (cached)
00:03:55.871  Compiler for C supports arguments -mavx512bw: YES 
00:03:55.871  Compiler for C supports arguments -mavx512dq: YES 
00:03:55.871  Compiler for C supports arguments -mavx512vl: YES 
00:03:55.871  Compiler for C supports arguments -mvpclmulqdq: YES 
00:03:55.871  Compiler for C supports arguments -mavx2: YES 
00:03:55.871  Compiler for C supports arguments -mavx: YES 
00:03:55.871  Message: lib/net: Defining dependency "net"
00:03:55.871  Message: lib/meter: Defining dependency "meter"
00:03:55.871  Message: lib/ethdev: Defining dependency "ethdev"
00:03:55.871  Message: lib/pci: Defining dependency "pci"
00:03:55.871  Message: lib/cmdline: Defining dependency "cmdline"
00:03:55.871  Message: lib/metrics: Defining dependency "metrics"
00:03:55.871  Message: lib/hash: Defining dependency "hash"
00:03:55.871  Message: lib/timer: Defining dependency "timer"
00:03:55.871  Fetching value of define "__AVX512F__" : (undefined) (cached)
00:03:55.871  Fetching value of define "__AVX512VL__" : (undefined) (cached)
00:03:55.871  Fetching value of define "__AVX512CD__" : (undefined) (cached)
00:03:55.871  Fetching value of define "__AVX512BW__" : (undefined) (cached)
00:03:55.871  Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 
00:03:55.871  Message: lib/acl: Defining dependency "acl"
00:03:55.871  Message: lib/bbdev: Defining dependency "bbdev"
00:03:55.871  Message: lib/bitratestats: Defining dependency "bitratestats"
00:03:55.871  Run-time dependency libelf found: YES 0.191
00:03:55.871  Message: lib/bpf: Defining dependency "bpf"
00:03:55.871  Message: lib/cfgfile: Defining dependency "cfgfile"
00:03:55.871  Message: lib/compressdev: Defining dependency "compressdev"
00:03:55.871  Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:55.871  Message: lib/distributor: Defining dependency "distributor"
00:03:55.871  Message: lib/dmadev: Defining dependency "dmadev"
00:03:55.871  Message: lib/efd: Defining dependency "efd"
00:03:55.871  Message: lib/eventdev: Defining dependency "eventdev"
00:03:55.871  Message: lib/dispatcher: Defining dependency "dispatcher"
00:03:55.871  Message: lib/gpudev: Defining dependency "gpudev"
00:03:55.871  Message: lib/gro: Defining dependency "gro"
00:03:55.871  Message: lib/gso: Defining dependency "gso"
00:03:55.871  Message: lib/ip_frag: Defining dependency "ip_frag"
00:03:55.871  Message: lib/jobstats: Defining dependency "jobstats"
00:03:55.871  Message: lib/latencystats: Defining dependency "latencystats"
00:03:55.871  Message: lib/lpm: Defining dependency "lpm"
00:03:55.871  Fetching value of define "__AVX512F__" : (undefined) (cached)
00:03:55.871  Fetching value of define "__AVX512DQ__" : (undefined) (cached)
00:03:55.871  Fetching value of define "__AVX512IFMA__" : (undefined) 
00:03:55.871  Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 
00:03:55.871  Message: lib/member: Defining dependency "member"
00:03:55.871  Message: lib/pcapng: Defining dependency "pcapng"
00:03:55.871  Compiler for C supports arguments -Wno-cast-qual: YES 
00:03:55.871  Message: lib/power: Defining dependency "power"
00:03:55.871  Message: lib/rawdev: Defining dependency "rawdev"
00:03:55.871  Message: lib/regexdev: Defining dependency "regexdev"
00:03:55.871  Message: lib/mldev: Defining dependency "mldev"
00:03:55.871  Message: lib/rib: Defining dependency "rib"
00:03:55.871  Message: lib/reorder: Defining dependency "reorder"
00:03:55.871  Message: lib/sched: Defining dependency "sched"
00:03:55.871  Message: lib/security: Defining dependency "security"
00:03:55.871  Message: lib/stack: Defining dependency "stack"
00:03:55.871  Has header "linux/userfaultfd.h" : YES 
00:03:55.871  Has header "linux/vduse.h" : YES 
00:03:55.871  Message: lib/vhost: Defining dependency "vhost"
00:03:55.871  Message: lib/ipsec: Defining dependency "ipsec"
00:03:55.871  Message: lib/pdcp: Defining dependency "pdcp"
00:03:55.871  Fetching value of define "__AVX512F__" : (undefined) (cached)
00:03:55.871  Fetching value of define "__AVX512DQ__" : (undefined) (cached)
00:03:55.871  Compiler for C supports arguments -mavx512f -mavx512dq: YES 
00:03:55.871  Compiler for C supports arguments -mavx512bw: YES (cached)
00:03:55.871  Message: lib/fib: Defining dependency "fib"
00:03:55.871  Message: lib/port: Defining dependency "port"
00:03:55.871  Message: lib/pdump: Defining dependency "pdump"
00:03:55.871  Message: lib/table: Defining dependency "table"
00:03:55.871  Message: lib/pipeline: Defining dependency "pipeline"
00:03:55.871  Message: lib/graph: Defining dependency "graph"
00:03:55.871  Message: lib/node: Defining dependency "node"
00:03:59.161  Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:59.161  Message: drivers/bus/auxiliary: Defining dependency "bus_auxiliary"
00:03:59.161  Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:59.161  Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:59.161  Compiler for C supports arguments -std=c11: YES 
00:03:59.161  Compiler for C supports arguments -Wno-strict-prototypes: YES 
00:03:59.161  Compiler for C supports arguments -D_BSD_SOURCE: YES 
00:03:59.161  Compiler for C supports arguments -D_DEFAULT_SOURCE: YES 
00:03:59.161  Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES 
00:03:59.161  Run-time dependency libmlx5 found: YES 1.24.46.0
00:03:59.161  Run-time dependency libibverbs found: YES 1.14.46.0
00:03:59.161  Library mtcr_ul found: NO
00:03:59.161  Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_ESP" with dependencies libmlx5, libibverbs: YES 
00:03:59.161  Header "infiniband/verbs.h" has symbol "IBV_RX_HASH_IPSEC_SPI" with dependencies libmlx5, libibverbs: YES 
00:03:59.161  Header "infiniband/verbs.h" has symbol "IBV_ACCESS_RELAXED_ORDERING " with dependencies libmlx5, libibverbs: YES 
00:03:59.161  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQE_RES_FORMAT_CSUM_STRIDX" with dependencies libmlx5, libibverbs: YES 
00:03:59.161  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_MASK_TUNNEL_OFFLOADS" with dependencies libmlx5, libibverbs: YES 
00:03:59.161  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_MPW_ALLOWED" with dependencies libmlx5, libibverbs: YES 
00:03:59.161  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_CQE_128B_COMP" with dependencies libmlx5, libibverbs: YES 
00:03:59.161  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQ_INIT_ATTR_FLAGS_CQE_PAD" with dependencies libmlx5, libibverbs: YES 
00:03:59.161  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_flow_action_packet_reformat" with dependencies libmlx5, libibverbs: YES 
00:03:59.161  Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_MPLS" with dependencies libmlx5, libibverbs: YES 
00:03:59.161  Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAGS_PCI_WRITE_END_PADDING" with dependencies libmlx5, libibverbs: YES 
00:03:59.161  Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAG_RX_END_PADDING" with dependencies libmlx5, libibverbs: NO 
00:03:59.161  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_devx_port" with dependencies libmlx5, libibverbs: NO 
00:03:59.161  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_port" with dependencies libmlx5, libibverbs: YES 
00:03:59.161  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_ib_port" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_create" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_COUNTERS_DEVX" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_DEFAULT_MISS" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_query_async" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_qp_query" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_pp_alloc" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_devx_tir" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_get_event" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_meter" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "infiniband/mlx5dv.h" has symbol "MLX5_MMAP_GET_NC_PAGES_CMD" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_NIC_RX" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_FDB" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_push_vlan" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_alloc_var" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ENHANCED_MPSW" with dependencies libmlx5, libibverbs: NO 
00:03:59.162  Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_SEND_EN" with dependencies libmlx5, libibverbs: NO 
00:03:59.162  Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_WAIT" with dependencies libmlx5, libibverbs: NO 
00:03:59.162  Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ACCESS_ASO" with dependencies libmlx5, libibverbs: NO 
00:03:59.162  Header "linux/ethtool.h" has symbol "SUPPORTED_40000baseKR4_Full" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "linux/ethtool.h" has symbol "SUPPORTED_40000baseCR4_Full" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "linux/ethtool.h" has symbol "SUPPORTED_40000baseSR4_Full" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "linux/ethtool.h" has symbol "SUPPORTED_40000baseLR4_Full" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "linux/ethtool.h" has symbol "SUPPORTED_56000baseKR4_Full" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "linux/ethtool.h" has symbol "SUPPORTED_56000baseCR4_Full" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "linux/ethtool.h" has symbol "SUPPORTED_56000baseSR4_Full" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "linux/ethtool.h" has symbol "SUPPORTED_56000baseLR4_Full" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "linux/ethtool.h" has symbol "ETHTOOL_LINK_MODE_25000baseCR_Full_BIT" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "linux/ethtool.h" has symbol "ETHTOOL_LINK_MODE_50000baseCR2_Full_BIT" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "linux/ethtool.h" has symbol "ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "linux/if_link.h" has symbol "IFLA_NUM_VF" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "linux/if_link.h" has symbol "IFLA_EXT_MASK" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "linux/if_link.h" has symbol "IFLA_PHYS_SWITCH_ID" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "linux/if_link.h" has symbol "IFLA_PHYS_PORT_NAME" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "rdma/rdma_netlink.h" has symbol "RDMA_NL_NLDEV" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_GET" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_PORT_GET" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_INDEX" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_NAME" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_INDEX" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_STATE" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_NDEV_INDEX" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_domain" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_sampler" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_set_reclaim_device_memory" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_array" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "linux/devlink.h" has symbol "DEVLINK_GENL_NAME" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_aso" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "infiniband/verbs.h" has symbol "INFINIBAND_VERBS_H" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "infiniband/mlx5dv.h" has symbol "MLX5_WQE_UMR_CTRL_FLAG_INLINE" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_rule" with dependencies libmlx5, libibverbs: YES 
00:03:59.162  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_ACTION_FLAGS_ASO_CT_DIRECTION_INITIATOR" with dependencies libmlx5, libibverbs: YES 
00:04:00.539  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_allow_duplicate_rules" with dependencies libmlx5, libibverbs: YES 
00:04:00.539  Header "infiniband/verbs.h" has symbol "ibv_reg_mr_iova" with dependencies libmlx5, libibverbs: YES 
00:04:00.539  Header "infiniband/verbs.h" has symbol "ibv_import_device" with dependencies libmlx5, libibverbs: YES 
00:04:00.539  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_root_table" with dependencies libmlx5, libibverbs: YES 
00:04:00.539  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_steering_anchor" with dependencies libmlx5, libibverbs: YES 
00:04:00.539  Header "infiniband/verbs.h" has symbol "ibv_is_fork_initialized" with dependencies libmlx5, libibverbs: YES 
00:04:00.539  Checking whether type "struct mlx5dv_sw_parsing_caps" has member "sw_parsing_offloads" with dependencies libmlx5, libibverbs: YES 
00:04:00.539  Checking whether type "struct ibv_counter_set_init_attr" has member "counter_set_id" with dependencies libmlx5, libibverbs: NO 
00:04:00.539  Checking whether type "struct ibv_counters_init_attr" has member "comp_mask" with dependencies libmlx5, libibverbs: YES 
00:04:00.539  Checking whether type "struct mlx5dv_devx_uar" has member "mmap_off" with dependencies libmlx5, libibverbs: YES 
00:04:00.539  Checking whether type "struct mlx5dv_flow_matcher_attr" has member "ft_type" with dependencies libmlx5, libibverbs: YES 
00:04:00.539  Configuring mlx5_autoconf.h using configuration
00:04:00.539  Message: drivers/common/mlx5: Defining dependency "common_mlx5"
00:04:00.539  Run-time dependency libcrypto found: YES 3.1.1
00:04:00.539  Library IPSec_MB found: YES
00:04:00.539  Fetching value of define "IMB_VERSION_STR" : "1.0.0" 
00:04:00.539  Message: drivers/common/qat: Defining dependency "common_qat"
00:04:00.539  Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:04:00.539  Compiler for C supports arguments -Wno-sign-compare: YES 
00:04:00.539  Compiler for C supports arguments -Wno-unused-value: YES 
00:04:00.539  Compiler for C supports arguments -Wno-format: YES 
00:04:00.539  Compiler for C supports arguments -Wno-format-security: YES 
00:04:00.539  Compiler for C supports arguments -Wno-format-nonliteral: YES 
00:04:00.539  Compiler for C supports arguments -Wno-strict-aliasing: YES 
00:04:00.539  Compiler for C supports arguments -Wno-unused-but-set-variable: YES 
00:04:00.539  Compiler for C supports arguments -Wno-unused-parameter: YES 
00:04:00.539  Fetching value of define "__AVX512F__" : (undefined) (cached)
00:04:00.539  Compiler for C supports arguments -mavx512f: YES (cached)
00:04:00.539  Compiler for C supports arguments -mavx512bw: YES (cached)
00:04:00.539  Compiler for C supports arguments -march=skylake-avx512: YES 
00:04:00.539  Message: drivers/net/i40e: Defining dependency "net_i40e"
00:04:00.539  Library IPSec_MB found: YES
00:04:00.539  Fetching value of define "IMB_VERSION_STR" : "1.0.0" (cached)
00:04:00.539  Message: drivers/crypto/ipsec_mb: Defining dependency "crypto_ipsec_mb"
00:04:00.539  Compiler for C supports arguments -std=c11: YES (cached)
00:04:00.539  Compiler for C supports arguments -Wno-strict-prototypes: YES (cached)
00:04:00.539  Compiler for C supports arguments -D_BSD_SOURCE: YES (cached)
00:04:00.539  Compiler for C supports arguments -D_DEFAULT_SOURCE: YES (cached)
00:04:00.539  Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES (cached)
00:04:00.539  Message: drivers/crypto/mlx5: Defining dependency "crypto_mlx5"
00:04:00.539  Has header "sys/epoll.h" : YES 
00:04:00.539  Program doxygen found: YES (/usr/local/bin/doxygen)
00:04:00.539  Configuring doxy-api-html.conf using configuration
00:04:00.539  Configuring doxy-api-man.conf using configuration
00:04:00.539  Program mandb found: YES (/usr/bin/mandb)
00:04:00.539  Program sphinx-build found: NO
00:04:00.539  Configuring rte_build_config.h using configuration
00:04:00.539  Message: 
00:04:00.539  =================
00:04:00.539  Applications Enabled
00:04:00.539  =================
00:04:00.539  
00:04:00.539  apps:
00:04:00.539  	dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 
00:04:00.539  	test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 
00:04:00.539  	test-pmd, test-regex, test-sad, test-security-perf, 
00:04:00.539  
00:04:00.539  Message: 
00:04:00.539  =================
00:04:00.539  Libraries Enabled
00:04:00.539  =================
00:04:00.539  
00:04:00.539  libs:
00:04:00.539  	log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 
00:04:00.539  	net, meter, ethdev, pci, cmdline, metrics, hash, timer, 
00:04:00.539  	acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 
00:04:00.539  	dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 
00:04:00.539  	jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 
00:04:00.539  	mldev, rib, reorder, sched, security, stack, vhost, ipsec, 
00:04:00.539  	pdcp, fib, port, pdump, table, pipeline, graph, node, 
00:04:00.539  	
00:04:00.539  
00:04:00.539  Message: 
00:04:00.539  ===============
00:04:00.539  Drivers Enabled
00:04:00.539  ===============
00:04:00.539  
00:04:00.539  common:
00:04:00.539  	mlx5, qat, 
00:04:00.539  bus:
00:04:00.539  	auxiliary, pci, vdev, 
00:04:00.539  mempool:
00:04:00.539  	ring, 
00:04:00.539  dma:
00:04:00.539  	
00:04:00.539  net:
00:04:00.539  	i40e, 
00:04:00.539  raw:
00:04:00.539  	
00:04:00.539  crypto:
00:04:00.539  	ipsec_mb, mlx5, 
00:04:00.539  compress:
00:04:00.539  	
00:04:00.539  regex:
00:04:00.539  	
00:04:00.539  ml:
00:04:00.539  	
00:04:00.539  vdpa:
00:04:00.539  	
00:04:00.539  event:
00:04:00.539  	
00:04:00.539  baseband:
00:04:00.539  	
00:04:00.539  gpu:
00:04:00.539  	
00:04:00.539  
00:04:01.483  Message: 
00:04:01.483  =================
00:04:01.483  Content Skipped
00:04:01.483  =================
00:04:01.483  
00:04:01.483  apps:
00:04:01.483  	
00:04:01.483  libs:
00:04:01.483  	
00:04:01.483  drivers:
00:04:01.483  	common/cpt:	not in enabled drivers build config
00:04:01.483  	common/dpaax:	not in enabled drivers build config
00:04:01.483  	common/iavf:	not in enabled drivers build config
00:04:01.483  	common/idpf:	not in enabled drivers build config
00:04:01.483  	common/mvep:	not in enabled drivers build config
00:04:01.483  	common/octeontx:	not in enabled drivers build config
00:04:01.483  	bus/cdx:	not in enabled drivers build config
00:04:01.483  	bus/dpaa:	not in enabled drivers build config
00:04:01.483  	bus/fslmc:	not in enabled drivers build config
00:04:01.483  	bus/ifpga:	not in enabled drivers build config
00:04:01.483  	bus/platform:	not in enabled drivers build config
00:04:01.483  	bus/vmbus:	not in enabled drivers build config
00:04:01.483  	common/cnxk:	not in enabled drivers build config
00:04:01.483  	common/nfp:	not in enabled drivers build config
00:04:01.483  	common/sfc_efx:	not in enabled drivers build config
00:04:01.483  	mempool/bucket:	not in enabled drivers build config
00:04:01.483  	mempool/cnxk:	not in enabled drivers build config
00:04:01.483  	mempool/dpaa:	not in enabled drivers build config
00:04:01.483  	mempool/dpaa2:	not in enabled drivers build config
00:04:01.483  	mempool/octeontx:	not in enabled drivers build config
00:04:01.483  	mempool/stack:	not in enabled drivers build config
00:04:01.483  	dma/cnxk:	not in enabled drivers build config
00:04:01.483  	dma/dpaa:	not in enabled drivers build config
00:04:01.483  	dma/dpaa2:	not in enabled drivers build config
00:04:01.483  	dma/hisilicon:	not in enabled drivers build config
00:04:01.483  	dma/idxd:	not in enabled drivers build config
00:04:01.483  	dma/ioat:	not in enabled drivers build config
00:04:01.483  	dma/skeleton:	not in enabled drivers build config
00:04:01.483  	net/af_packet:	not in enabled drivers build config
00:04:01.483  	net/af_xdp:	not in enabled drivers build config
00:04:01.483  	net/ark:	not in enabled drivers build config
00:04:01.483  	net/atlantic:	not in enabled drivers build config
00:04:01.483  	net/avp:	not in enabled drivers build config
00:04:01.483  	net/axgbe:	not in enabled drivers build config
00:04:01.483  	net/bnx2x:	not in enabled drivers build config
00:04:01.483  	net/bnxt:	not in enabled drivers build config
00:04:01.483  	net/bonding:	not in enabled drivers build config
00:04:01.484  	net/cnxk:	not in enabled drivers build config
00:04:01.484  	net/cpfl:	not in enabled drivers build config
00:04:01.484  	net/cxgbe:	not in enabled drivers build config
00:04:01.484  	net/dpaa:	not in enabled drivers build config
00:04:01.484  	net/dpaa2:	not in enabled drivers build config
00:04:01.484  	net/e1000:	not in enabled drivers build config
00:04:01.484  	net/ena:	not in enabled drivers build config
00:04:01.484  	net/enetc:	not in enabled drivers build config
00:04:01.484  	net/enetfec:	not in enabled drivers build config
00:04:01.484  	net/enic:	not in enabled drivers build config
00:04:01.484  	net/failsafe:	not in enabled drivers build config
00:04:01.484  	net/fm10k:	not in enabled drivers build config
00:04:01.484  	net/gve:	not in enabled drivers build config
00:04:01.484  	net/hinic:	not in enabled drivers build config
00:04:01.484  	net/hns3:	not in enabled drivers build config
00:04:01.484  	net/iavf:	not in enabled drivers build config
00:04:01.484  	net/ice:	not in enabled drivers build config
00:04:01.484  	net/idpf:	not in enabled drivers build config
00:04:01.484  	net/igc:	not in enabled drivers build config
00:04:01.484  	net/ionic:	not in enabled drivers build config
00:04:01.484  	net/ipn3ke:	not in enabled drivers build config
00:04:01.484  	net/ixgbe:	not in enabled drivers build config
00:04:01.484  	net/mana:	not in enabled drivers build config
00:04:01.484  	net/memif:	not in enabled drivers build config
00:04:01.484  	net/mlx4:	not in enabled drivers build config
00:04:01.484  	net/mlx5:	not in enabled drivers build config
00:04:01.484  	net/mvneta:	not in enabled drivers build config
00:04:01.484  	net/mvpp2:	not in enabled drivers build config
00:04:01.484  	net/netvsc:	not in enabled drivers build config
00:04:01.484  	net/nfb:	not in enabled drivers build config
00:04:01.484  	net/nfp:	not in enabled drivers build config
00:04:01.484  	net/ngbe:	not in enabled drivers build config
00:04:01.484  	net/null:	not in enabled drivers build config
00:04:01.484  	net/octeontx:	not in enabled drivers build config
00:04:01.484  	net/octeon_ep:	not in enabled drivers build config
00:04:01.484  	net/pcap:	not in enabled drivers build config
00:04:01.484  	net/pfe:	not in enabled drivers build config
00:04:01.484  	net/qede:	not in enabled drivers build config
00:04:01.484  	net/ring:	not in enabled drivers build config
00:04:01.484  	net/sfc:	not in enabled drivers build config
00:04:01.484  	net/softnic:	not in enabled drivers build config
00:04:01.484  	net/tap:	not in enabled drivers build config
00:04:01.484  	net/thunderx:	not in enabled drivers build config
00:04:01.484  	net/txgbe:	not in enabled drivers build config
00:04:01.484  	net/vdev_netvsc:	not in enabled drivers build config
00:04:01.484  	net/vhost:	not in enabled drivers build config
00:04:01.484  	net/virtio:	not in enabled drivers build config
00:04:01.484  	net/vmxnet3:	not in enabled drivers build config
00:04:01.484  	raw/cnxk_bphy:	not in enabled drivers build config
00:04:01.484  	raw/cnxk_gpio:	not in enabled drivers build config
00:04:01.484  	raw/dpaa2_cmdif:	not in enabled drivers build config
00:04:01.484  	raw/ifpga:	not in enabled drivers build config
00:04:01.484  	raw/ntb:	not in enabled drivers build config
00:04:01.484  	raw/skeleton:	not in enabled drivers build config
00:04:01.484  	crypto/armv8:	not in enabled drivers build config
00:04:01.484  	crypto/bcmfs:	not in enabled drivers build config
00:04:01.484  	crypto/caam_jr:	not in enabled drivers build config
00:04:01.484  	crypto/ccp:	not in enabled drivers build config
00:04:01.484  	crypto/cnxk:	not in enabled drivers build config
00:04:01.484  	crypto/dpaa_sec:	not in enabled drivers build config
00:04:01.484  	crypto/dpaa2_sec:	not in enabled drivers build config
00:04:01.484  	crypto/mvsam:	not in enabled drivers build config
00:04:01.484  	crypto/nitrox:	not in enabled drivers build config
00:04:01.484  	crypto/null:	not in enabled drivers build config
00:04:01.484  	crypto/octeontx:	not in enabled drivers build config
00:04:01.484  	crypto/openssl:	not in enabled drivers build config
00:04:01.484  	crypto/scheduler:	not in enabled drivers build config
00:04:01.484  	crypto/uadk:	not in enabled drivers build config
00:04:01.484  	crypto/virtio:	not in enabled drivers build config
00:04:01.484  	compress/isal:	not in enabled drivers build config
00:04:01.484  	compress/mlx5:	not in enabled drivers build config
00:04:01.484  	compress/octeontx:	not in enabled drivers build config
00:04:01.484  	compress/zlib:	not in enabled drivers build config
00:04:01.484  	regex/mlx5:	not in enabled drivers build config
00:04:01.484  	regex/cn9k:	not in enabled drivers build config
00:04:01.484  	ml/cnxk:	not in enabled drivers build config
00:04:01.484  	vdpa/ifc:	not in enabled drivers build config
00:04:01.484  	vdpa/mlx5:	not in enabled drivers build config
00:04:01.484  	vdpa/nfp:	not in enabled drivers build config
00:04:01.484  	vdpa/sfc:	not in enabled drivers build config
00:04:01.484  	event/cnxk:	not in enabled drivers build config
00:04:01.484  	event/dlb2:	not in enabled drivers build config
00:04:01.484  	event/dpaa:	not in enabled drivers build config
00:04:01.484  	event/dpaa2:	not in enabled drivers build config
00:04:01.484  	event/dsw:	not in enabled drivers build config
00:04:01.484  	event/opdl:	not in enabled drivers build config
00:04:01.484  	event/skeleton:	not in enabled drivers build config
00:04:01.484  	event/sw:	not in enabled drivers build config
00:04:01.484  	event/octeontx:	not in enabled drivers build config
00:04:01.484  	baseband/acc:	not in enabled drivers build config
00:04:01.484  	baseband/fpga_5gnr_fec:	not in enabled drivers build config
00:04:01.484  	baseband/fpga_lte_fec:	not in enabled drivers build config
00:04:01.484  	baseband/la12xx:	not in enabled drivers build config
00:04:01.484  	baseband/null:	not in enabled drivers build config
00:04:01.484  	baseband/turbo_sw:	not in enabled drivers build config
00:04:01.484  	gpu/cuda:	not in enabled drivers build config
00:04:01.484  	
00:04:01.484  
00:04:01.484  Build targets in project: 242
00:04:01.484  
00:04:01.484  DPDK 23.11.0
00:04:01.484  
00:04:01.484    User defined options
00:04:01.484      libdir        : lib
00:04:01.484      prefix        : /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build
00:04:01.484      c_args        : -fPIC -g -fcommon -Werror -Wno-stringop-overflow -I/var/jenkins/workspace/vfio-user-phy-autotest/dpdk/intel-ipsec-mb/lib
00:04:01.484      c_link_args   :  -L/var/jenkins/workspace/vfio-user-phy-autotest/dpdk/intel-ipsec-mb/lib
00:04:01.484      enable_docs   : false
00:04:01.484      enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,crypto,crypto/ipsec_mb,crypto/qat,compress/qat,common/qat,bus/auxiliary,common/mlx5,common/mlx5/linux,crypto/mlx5,
00:04:01.484      enable_kmods  : false
00:04:01.484      machine       : native
00:04:01.484      tests         : false
00:04:01.484  
00:04:01.484  Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:04:01.484  WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:04:01.484   18:24:47 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build-tmp -j88
00:04:01.484  ninja: Entering directory `/var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build-tmp'
00:04:01.484  [1/797] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:04:01.484  [2/797] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:04:01.745  [3/797] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:04:01.745  [4/797] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:04:01.745  [5/797] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:04:01.745  [6/797] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:04:01.745  [7/797] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:04:01.745  [8/797] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:04:01.745  [9/797] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:04:01.745  [10/797] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:04:01.745  [11/797] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:04:01.745  [12/797] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:04:01.745  [13/797] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:04:01.745  [14/797] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:04:01.745  [15/797] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:04:01.745  [16/797] Linking static target lib/librte_kvargs.a
00:04:01.745  [17/797] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:04:01.745  [18/797] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:04:01.745  [19/797] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:04:01.745  [20/797] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:04:01.745  [21/797] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:04:01.745  [22/797] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:04:01.745  [23/797] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:04:02.004  [24/797] Compiling C object lib/librte_log.a.p/log_log.c.o
00:04:02.004  [25/797] Linking static target lib/librte_log.a
00:04:02.004  [26/797] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:04:02.004  [27/797] Linking static target lib/librte_pci.a
00:04:02.004  [28/797] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:04:02.004  [29/797] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:04:02.270  [30/797] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:04:02.270  [31/797] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:04:02.270  [32/797] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:04:02.270  [33/797] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:04:02.530  [34/797] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:04:02.530  [35/797] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:04:02.530  [36/797] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:04:02.530  [37/797] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:04:02.530  [38/797] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:04:02.530  [39/797] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:04:02.530  [40/797] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:04:02.530  [41/797] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:04:02.530  [42/797] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:04:02.530  [43/797] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:04:02.530  [44/797] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:04:02.530  [45/797] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:04:02.530  [46/797] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:04:02.530  [47/797] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:04:02.530  [48/797] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:04:02.530  [49/797] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:04:02.530  [50/797] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:04:02.530  [51/797] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:04:02.530  [52/797] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:04:02.530  [53/797] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:04:02.530  [54/797] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:04:02.530  [55/797] Linking static target lib/net/libnet_crc_avx512_lib.a
00:04:02.530  [56/797] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:04:02.530  [57/797] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:04:02.530  [58/797] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:04:02.530  [59/797] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:04:02.530  [60/797] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:04:02.530  [61/797] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:04:02.530  [62/797] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:04:02.793  [63/797] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:04:02.793  [64/797] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:04:02.793  [65/797] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:04:02.793  [66/797] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:04:02.793  [67/797] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:04:02.793  [68/797] Linking static target lib/librte_meter.a
00:04:02.793  [69/797] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:04:02.793  [70/797] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:04:02.793  [71/797] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:04:02.793  [72/797] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:04:02.793  [73/797] Linking static target lib/librte_ring.a
00:04:02.793  [74/797] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:04:02.793  [75/797] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:04:02.793  [76/797] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:04:02.793  [77/797] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:04:02.793  [78/797] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:04:02.793  [79/797] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:04:02.793  [80/797] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:04:02.793  [81/797] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:04:02.793  [82/797] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:04:02.793  [83/797] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:04:02.793  [84/797] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:04:02.793  [85/797] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:04:02.793  [86/797] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:04:02.793  [87/797] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:04:02.793  [88/797] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:04:02.793  [89/797] Linking target lib/librte_log.so.24.0
00:04:02.793  [90/797] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:04:02.793  [91/797] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:04:02.794  [92/797] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:04:02.794  [93/797] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:04:02.794  [94/797] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:04:02.794  [95/797] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:04:02.794  [96/797] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:04:02.794  [97/797] Linking static target lib/librte_net.a
00:04:02.794  [98/797] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:04:03.060  [99/797] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:04:03.060  [100/797] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:04:03.060  [101/797] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:04:03.060  [102/797] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:04:03.060  [103/797] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:04:03.060  [104/797] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:04:03.060  [105/797] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:04:03.060  [106/797] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:04:03.060  [107/797] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:04:03.061  [108/797] Linking target lib/librte_kvargs.so.24.0
00:04:03.061  [109/797] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:04:03.061  [110/797] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:04:03.061  [111/797] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:04:03.061  [112/797] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:04:03.061  [113/797] Linking static target lib/librte_cmdline.a
00:04:03.061  [114/797] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o
00:04:03.061  [115/797] Linking static target lib/librte_cfgfile.a
00:04:03.061  [116/797] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:04:03.061  [117/797] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:04:03.320  [118/797] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:04:03.320  [119/797] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:04:03.320  [120/797] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:04:03.320  [121/797] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:04:03.320  [122/797] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:04:03.320  [123/797] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:04:03.320  [124/797] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:04:03.320  [125/797] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:04:03.320  [126/797] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:04:03.320  [127/797] Linking static target lib/librte_mempool.a
00:04:03.320  [128/797] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:04:03.581  [129/797] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:04:03.581  [130/797] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o
00:04:03.581  [131/797] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o
00:04:03.581  [132/797] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:04:03.581  [133/797] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:04:03.581  [134/797] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o
00:04:03.581  [135/797] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o
00:04:03.581  [136/797] Linking static target lib/librte_metrics.a
00:04:03.581  [137/797] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o
00:04:03.581  [138/797] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:04:03.581  [139/797] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o
00:04:03.581  [140/797] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:04:03.581  [141/797] Linking static target lib/librte_bitratestats.a
00:04:03.581  [142/797] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o
00:04:03.581  [143/797] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o
00:04:03.581  [144/797] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:04:03.581  [145/797] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:04:03.845  [146/797] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:04:03.845  [147/797] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:04:03.845  [148/797] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output)
00:04:03.845  [149/797] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o
00:04:03.845  [150/797] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o
00:04:03.845  [151/797] Linking static target lib/librte_eal.a
00:04:03.845  [152/797] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:04:03.845  [153/797] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o
00:04:03.845  [154/797] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:04:03.845  [155/797] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:04:03.845  [156/797] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:04:03.845  [157/797] Linking static target lib/librte_rcu.a
00:04:03.845  [158/797] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:04:03.845  [159/797] Linking static target lib/librte_telemetry.a
00:04:03.845  [160/797] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:04:03.845  [161/797] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o
00:04:03.845  [162/797] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:04:03.845  [163/797] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o
00:04:03.845  [164/797] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:04:04.106  [165/797] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output)
00:04:04.106  [166/797] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:04:04.106  [167/797] Linking static target lib/librte_compressdev.a
00:04:04.106  [168/797] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o
00:04:04.106  [169/797] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o
00:04:04.106  [170/797] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o
00:04:04.106  [171/797] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:04:04.106  [172/797] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:04:04.106  [173/797] Linking static target lib/librte_timer.a
00:04:04.106  [174/797] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:04:04.106  [175/797] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o
00:04:04.106  [176/797] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o
00:04:04.106  [177/797] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o
00:04:04.106  [178/797] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:04:04.106  [179/797] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o
00:04:04.106  [180/797] Linking static target lib/librte_mbuf.a
00:04:04.106  [181/797] Linking static target lib/librte_dispatcher.a
00:04:04.106  [182/797] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:04:04.106  [183/797] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o
00:04:04.106  [184/797] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o
00:04:04.106  [185/797] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output)
00:04:04.106  [186/797] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:04:04.369  [187/797] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:04:04.369  [188/797] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o
00:04:04.369  [189/797] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:04:04.369  [190/797] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o
00:04:04.369  [191/797] Linking static target lib/librte_jobstats.a
00:04:04.369  [192/797] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:04:04.369  [193/797] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o
00:04:04.369  [194/797] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o
00:04:04.369  [195/797] Linking static target lib/librte_bbdev.a
00:04:04.369  [196/797] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:04:04.369  [197/797] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:04:04.369  [198/797] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o
00:04:04.369  [199/797] Linking static target lib/librte_gpudev.a
00:04:04.369  [200/797] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:04:04.369  [201/797] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:04:04.369  [202/797] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o
00:04:04.369  [203/797] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:04:04.369  [204/797] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o
00:04:04.369  [205/797] Linking static target lib/librte_dmadev.a
00:04:04.369  [206/797] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o
00:04:04.369  [207/797] Linking static target lib/librte_gro.a
00:04:04.369  [208/797] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:04:04.631  [209/797] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o
00:04:04.631  [210/797] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:04:04.631  [211/797] Linking static target lib/librte_distributor.a
00:04:04.631  [212/797] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o
00:04:04.631  [213/797] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:04:04.631  [214/797] Linking static target lib/member/libsketch_avx512_tmp.a
00:04:04.631  [215/797] Linking static target lib/librte_gso.a
00:04:04.631  [216/797] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:04:04.631  [217/797] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:04:04.631  [218/797] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:04:04.631  [219/797] Linking static target lib/librte_latencystats.a
00:04:04.631  [220/797] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:04:04.631  [221/797] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:04:04.631  [222/797] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:04:04.631  [223/797] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o
00:04:04.631  [224/797] Linking target lib/librte_telemetry.so.24.0
00:04:04.631  [225/797] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:04:04.631  [226/797] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:04:04.631  [227/797] Linking static target lib/librte_ip_frag.a
00:04:04.631  [228/797] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:04:04.631  [229/797] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o
00:04:04.631  [230/797] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o
00:04:04.631  [231/797] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:04:04.899  [232/797] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:04:04.899  [233/797] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o
00:04:04.899  [234/797] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:04:04.899  [235/797] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:04:04.899  [236/797] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:04:04.899  [237/797] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output)
00:04:04.899  [238/797] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:04:04.899  [239/797] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:04:04.899  [240/797] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:04:04.899  [241/797] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o
00:04:04.899  [242/797] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output)
00:04:04.899  [243/797] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:04:04.899  [244/797] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output)
00:04:04.899  [245/797] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o
00:04:04.899  [246/797] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:04:04.899  [247/797] Linking static target lib/librte_bpf.a
00:04:04.899  [248/797] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:04:04.899  [249/797] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:04:04.899  [250/797] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:04:05.164  [251/797] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:04:05.164  [252/797] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:04:05.164  [253/797] Linking static target lib/librte_regexdev.a
00:04:05.164  [254/797] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:04:05.164  [255/797] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:04:05.164  [256/797] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:04:05.164  [257/797] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:04:05.164  [258/797] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o
00:04:05.164  [259/797] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:04:05.164  [260/797] Linking static target lib/librte_rawdev.a
00:04:05.164  [261/797] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:04:05.164  [262/797] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:04:05.164  [263/797] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:04:05.164  [264/797] Linking static target lib/librte_stack.a
00:04:05.164  [265/797] Linking static target lib/librte_power.a
00:04:05.164  [266/797] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o
00:04:05.164  [267/797] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o
00:04:05.164  [268/797] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:04:05.164  [269/797] Linking static target lib/librte_mldev.a
00:04:05.164  [270/797] Linking static target lib/librte_pcapng.a
00:04:05.164  [271/797] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output)
00:04:05.423  [272/797] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o
00:04:05.423  [273/797] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o
00:04:05.423  [274/797] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o
00:04:05.423  [275/797] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:04:05.423  [276/797] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output)
00:04:05.423  [277/797] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:04:05.423  [278/797] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o
00:04:05.423  [279/797] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:04:05.423  [280/797] Linking static target lib/librte_efd.a
00:04:05.423  [281/797] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o
00:04:05.423  [282/797] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:04:05.423  [283/797] Linking static target lib/librte_reorder.a
00:04:05.423  [284/797] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o
00:04:05.423  [285/797] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o
00:04:05.423  [286/797] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:04:05.423  [287/797] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:04:05.685  [288/797] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o
00:04:05.685  [289/797] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:04:05.685  [290/797] Linking static target lib/librte_security.a
00:04:05.685  [291/797] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:04:05.685  [292/797] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:04:05.685  [293/797] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:04:05.685  [294/797] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o
00:04:05.685  [295/797] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:04:05.685  [296/797] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:04:05.685  [297/797] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o
00:04:05.685  [298/797] Linking static target lib/librte_lpm.a
00:04:05.685  [299/797] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:04:05.685  [300/797] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:04:05.685  [301/797] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o
00:04:05.685  [302/797] Linking static target lib/fib/libtrie_avx512_tmp.a
00:04:05.685  [303/797] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o
00:04:05.685  [304/797] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o
00:04:05.685  [305/797] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o
00:04:05.685  [306/797] Linking static target lib/fib/libdir24_8_avx512_tmp.a
00:04:05.685  [307/797] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o
00:04:05.947  [308/797] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:04:05.947  [309/797] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:04:05.947  [310/797] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:04:05.947  [311/797] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:04:05.947  [312/797] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o
00:04:05.947  [313/797] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o
00:04:05.947  [314/797] Compiling C object lib/librte_node.a.p/node_null.c.o
00:04:05.947  [315/797] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:04:05.947  [316/797] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o
00:04:06.206  [317/797] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o
00:04:06.206  [318/797] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:04:06.206  [319/797] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_params.c.o
00:04:06.206  [320/797] Linking static target lib/librte_rib.a
00:04:06.206  [321/797] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:04:06.206  [322/797] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o
00:04:06.206  [323/797] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o
00:04:06.206  [324/797] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o
00:04:06.206  [325/797] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:04:06.206  [326/797] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:04:06.206  [327/797] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:04:06.206  [328/797] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:04:06.206  [329/797] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o
00:04:06.206  [330/797] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o
00:04:06.206  [331/797] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o
00:04:06.206  [332/797] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:04:06.471  [333/797] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o
00:04:06.471  [334/797] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o
00:04:06.471  [335/797] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o
00:04:06.471  [336/797] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o
00:04:06.471  [337/797] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o
00:04:06.471  [338/797] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:04:06.471  [339/797] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o
00:04:06.471  [340/797] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o
00:04:06.471  [341/797] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o
00:04:06.471  [342/797] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o
00:04:06.471  [343/797] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o
00:04:06.471  [344/797] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o
00:04:06.471  [345/797] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o
00:04:06.734  [346/797] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o
00:04:06.734  [347/797] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_logs.c.o
00:04:06.734  [348/797] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:04:06.734  [349/797] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o
00:04:06.734  [350/797] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:04:06.734  [351/797] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o
00:04:06.734  [352/797] Compiling C object lib/librte_fib.a.p/fib_trie.c.o
00:04:06.734  [353/797] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o
00:04:06.734  [354/797] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o
00:04:06.734  [355/797] Compiling C object lib/librte_graph.a.p/graph_node.c.o
00:04:06.734  [356/797] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o
00:04:06.734  [357/797] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o
00:04:06.734  [358/797] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:04:06.993  [359/797] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o
00:04:06.993  [360/797] Linking static target lib/librte_fib.a
00:04:06.993  [361/797] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:04:06.993  [362/797] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o
00:04:06.993  [363/797] Compiling C object lib/librte_node.a.p/node_log.c.o
00:04:06.993  [364/797] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o
00:04:06.993  [365/797] Compiling C object lib/librte_graph.a.p/graph_graph.c.o
00:04:06.993  [366/797] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:04:06.993  [367/797] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o
00:04:06.993  [368/797] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:04:06.993  [369/797] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o
00:04:06.993  [370/797] Linking static target lib/librte_cryptodev.a
00:04:06.993  [371/797] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:04:06.993  [372/797] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o
00:04:06.993  [373/797] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_glue.c.o
00:04:06.993  [374/797] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_linux_auxiliary.c.o
00:04:06.993  [375/797] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_common.c.o
00:04:06.993  [376/797] Linking static target drivers/libtmp_rte_bus_auxiliary.a
00:04:06.993  [377/797] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o
00:04:06.993  [378/797] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o
00:04:07.256  [379/797] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o
00:04:07.256  [380/797] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o
00:04:07.256  [381/797] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o
00:04:07.256  [382/797] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:04:07.256  [383/797] Linking static target lib/librte_pdump.a
00:04:07.256  [384/797] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o
00:04:07.256  [385/797] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o
00:04:07.256  [386/797] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output)
00:04:07.515  [387/797] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o
00:04:07.515  [388/797] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o
00:04:07.515  [389/797] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output)
00:04:07.515  [390/797] Linking static target lib/librte_sched.a
00:04:07.515  [391/797] Linking static target lib/librte_graph.a
00:04:07.515  [392/797] Generating drivers/rte_bus_auxiliary.pmd.c with a custom command
00:04:07.515  [393/797] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o
00:04:07.515  [394/797] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:04:07.515  [395/797] Compiling C object drivers/librte_bus_auxiliary.a.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o
00:04:07.515  [396/797] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o
00:04:07.515  [397/797] Linking static target drivers/librte_bus_auxiliary.a
00:04:07.515  [398/797] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:04:07.515  [399/797] Compiling C object drivers/librte_bus_auxiliary.so.24.0.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o
00:04:07.515  [400/797] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o
00:04:07.515  [401/797] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_pci.c.o
00:04:07.515  [402/797] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mp.c.o
00:04:07.515  [403/797] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_malloc.c.o
00:04:07.515  [404/797] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:04:07.515  [405/797] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o
00:04:07.515  [406/797] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_common.c.o
00:04:07.515  [407/797] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o
00:04:07.515  [408/797] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o
00:04:07.515  [409/797] Linking static target lib/librte_member.a
00:04:07.515  [410/797] Linking static target lib/acl/libavx2_tmp.a
00:04:07.515  [411/797] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_verbs.c.o
00:04:07.515  [412/797] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:04:07.515  [413/797] Linking static target lib/librte_table.a
00:04:07.515  [414/797] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:04:07.515  [415/797] Linking static target drivers/libtmp_rte_bus_vdev.a
00:04:07.515  [416/797] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o
00:04:07.515  [417/797] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o
00:04:07.515  [418/797] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen2.c.o
00:04:07.515  [419/797] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o
00:04:07.515  [420/797] Linking static target lib/librte_ipsec.a
00:04:07.775  [421/797] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen3.c.o
00:04:07.775  [422/797] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_auxiliary.c.o
00:04:07.775  [423/797] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output)
00:04:07.775  [424/797] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_pf2vf.c.o
00:04:07.775  [425/797] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen3.c.o
00:04:07.775  [426/797] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen1.c.o
00:04:07.775  [427/797] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_devx.c.o
00:04:07.775  [428/797] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen1.c.o
00:04:07.775  [429/797] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen2.c.o
00:04:07.775  [430/797] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen4.c.o
00:04:07.775  [431/797] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:04:07.775  [432/797] Linking static target drivers/libtmp_rte_bus_pci.a
00:04:07.775  [433/797] Generating drivers/rte_bus_auxiliary.sym_chk with a custom command (wrapped by meson to capture output)
00:04:07.775  [434/797] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen4.c.o
00:04:07.775  [435/797] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_device.c.o
00:04:07.775  [436/797] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:04:08.038  [437/797] Linking static target lib/librte_hash.a
00:04:08.038  [438/797] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_utils.c.o
00:04:08.038  [439/797] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_os.c.o
00:04:08.038  [440/797] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:04:08.038  [441/797] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o
00:04:08.038  [442/797] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp_pmd.c.o
00:04:08.038  [443/797] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:04:08.038  [444/797] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:04:08.038  [445/797] Linking static target drivers/librte_bus_vdev.a
00:04:08.038  [446/797] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common.c.o
00:04:08.038  [447/797] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:04:08.038  [448/797] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o
00:04:08.038  [449/797] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_crypto.c.o
00:04:08.038  [450/797] Linking static target lib/librte_eventdev.a
00:04:08.038  [451/797] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o
00:04:08.038  [452/797] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:04:08.038  [453/797] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output)
00:04:08.038  [454/797] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_asym_pmd_gen1.c.o
00:04:08.038  [455/797] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:04:08.301  [456/797] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym.c.o
00:04:08.301  [457/797] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output)
00:04:08.301  [458/797] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen2.c.o
00:04:08.301  [459/797] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o
00:04:08.301  [460/797] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_qp.c.o
00:04:08.301  [461/797] Linking static target lib/acl/libavx512_tmp.a
00:04:08.301  [462/797] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:04:08.301  [463/797] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:04:08.301  [464/797] Linking static target lib/librte_acl.a
00:04:08.301  [465/797] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:04:08.301  [466/797] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o
00:04:08.301  [467/797] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:04:08.301  [468/797] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:04:08.301  [469/797] Linking static target drivers/librte_bus_pci.a
00:04:08.301  [470/797] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o
00:04:08.301  [471/797] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o
00:04:08.301  [472/797] Linking static target lib/librte_pdcp.a
00:04:08.301  [473/797] Linking static target lib/librte_node.a
00:04:08.301  [474/797] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:04:08.563  [475/797] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output)
00:04:08.563  [476/797] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o
00:04:08.563  [477/797] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mr.c.o
00:04:08.563  [478/797] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:04:08.563  [479/797] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:04:08.830  [480/797] Compiling C object app/dpdk-graph.p/graph_cli.c.o
00:04:08.830  [481/797] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output)
00:04:08.830  [482/797] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o
00:04:08.830  [483/797] Linking static target lib/librte_port.a
00:04:08.830  [484/797] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output)
00:04:08.830  [485/797] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:04:08.830  [486/797] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp.c.o
00:04:08.830  [487/797] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o
00:04:08.830  [488/797] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_ops.c.o
00:04:08.830  [489/797] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_nl.c.o
00:04:08.830  [490/797] Compiling C object app/dpdk-graph.p/graph_conn.c.o
00:04:08.830  [491/797] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o
00:04:08.830  [492/797] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output)
00:04:08.830  [493/797] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o
00:04:08.830  [494/797] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o
00:04:08.830  [495/797] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output)
00:04:08.830  [496/797] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o
00:04:08.830  [497/797] Compiling C object app/dpdk-graph.p/graph_mempool.c.o
00:04:08.830  [498/797] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_dek.c.o
00:04:08.830  [499/797] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o
00:04:09.092  [500/797] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:04:09.092  [501/797] Compiling C object app/dpdk-graph.p/graph_utils.c.o
00:04:09.092  [502/797] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o
00:04:09.092  [503/797] Compiling C object app/dpdk-graph.p/graph_main.c.o
00:04:09.092  [504/797] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto.c.o
00:04:09.092  [505/797] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o
00:04:09.092  [506/797] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:04:09.092  [507/797] Compiling C object app/dpdk-graph.p/graph_graph.c.o
00:04:09.092  [508/797] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:04:09.092  [509/797] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:04:09.092  [510/797] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o
00:04:09.092  [511/797] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:04:09.092  [512/797] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o
00:04:09.092  [513/797] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o
00:04:09.092  [514/797] Compiling C object app/dpdk-graph.p/graph_neigh.c.o
00:04:09.092  [515/797] Linking static target drivers/libtmp_rte_mempool_ring.a
00:04:09.092  [516/797] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o
00:04:09.092  [517/797] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o
00:04:09.359  [518/797] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o
00:04:09.359  [519/797] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_xts.c.o
00:04:09.359  [520/797] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o
00:04:09.359  [521/797] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o
00:04:09.359  [522/797] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_private.c.o
00:04:09.359  [523/797] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o
00:04:09.359  [524/797] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o
00:04:09.623  [525/797] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:04:09.623  [526/797] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output)
00:04:09.623  [527/797] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:04:09.623  [528/797] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o
00:04:09.623  [529/797] Linking static target drivers/librte_mempool_ring.a
00:04:09.623  [530/797] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:04:09.623  [531/797] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o
00:04:09.623  [532/797] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:04:09.623  [533/797] Linking static target lib/librte_ethdev.a
00:04:09.623  [534/797] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o
00:04:09.885  [535/797] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_chacha_poly.c.o
00:04:09.885  [536/797] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym_session.c.o
00:04:09.885  [537/797] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_gcm.c.o
00:04:09.885  [538/797] Linking static target drivers/libtmp_rte_crypto_mlx5.a
00:04:09.885  [539/797] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_zuc.c.o
00:04:09.885  [540/797] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o
00:04:09.885  [541/797] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o
00:04:09.885  [542/797] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o
00:04:09.885  [543/797] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o
00:04:10.149  [544/797] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen4.c.o
00:04:10.149  [545/797] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_gcm.c.o
00:04:10.149  [546/797] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o
00:04:10.149  [547/797] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o
00:04:10.149  [548/797] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_kasumi.c.o
00:04:10.149  [549/797] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o
00:04:10.149  [550/797] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o
00:04:10.149  [551/797] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o
00:04:10.149  [552/797] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o
00:04:10.149  [553/797] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o
00:04:10.149  [554/797] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o
00:04:10.149  [555/797] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:04:10.149  [556/797] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o
00:04:10.149  [557/797] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o
00:04:10.149  [558/797] Compiling C object app/dpdk-pdump.p/pdump_main.c.o
00:04:10.149  [559/797] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o
00:04:10.149  [560/797] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o
00:04:10.149  [561/797] Generating drivers/rte_crypto_mlx5.pmd.c with a custom command
00:04:10.149  [562/797] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o
00:04:10.149  [563/797] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o
00:04:10.411  [564/797] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o
00:04:10.411  [565/797] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o
00:04:10.411  [566/797] Compiling C object drivers/librte_crypto_mlx5.so.24.0.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o
00:04:10.411  [567/797] Compiling C object drivers/librte_crypto_mlx5.a.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o
00:04:10.411  [568/797] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o
00:04:10.411  [569/797] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o
00:04:10.411  [570/797] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o
00:04:10.411  [571/797] Linking static target drivers/librte_crypto_mlx5.a
00:04:10.411  [572/797] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o
00:04:10.411  [573/797] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o
00:04:10.411  [574/797] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o
00:04:10.411  [575/797] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o
00:04:10.411  [576/797] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o
00:04:10.411  [577/797] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o
00:04:10.411  [578/797] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:04:10.411  [579/797] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:04:10.411  [580/797] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o
00:04:10.411  [581/797] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o
00:04:10.670  [582/797] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o
00:04:10.670  [583/797] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o
00:04:10.670  [584/797] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o
00:04:10.670  [585/797] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o
00:04:10.670  [586/797] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:04:10.670  [587/797] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o
00:04:10.670  [588/797] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o
00:04:10.670  [589/797] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o
00:04:10.670  [590/797] Linking static target drivers/net/i40e/libi40e_avx512_lib.a
00:04:10.670  [591/797] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o
00:04:10.670  [592/797] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o
00:04:10.670  [593/797] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_devx_cmds.c.o
00:04:10.670  [594/797] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o
00:04:10.670  [595/797] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o
00:04:10.670  [596/797] Linking static target drivers/libtmp_rte_common_mlx5.a
00:04:10.670  [597/797] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o
00:04:10.670  [598/797] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o
00:04:10.670  [599/797] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o
00:04:10.670  [600/797] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_snow3g.c.o
00:04:10.930  [601/797] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o
00:04:10.930  [602/797] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o
00:04:10.930  [603/797] Linking static target drivers/net/i40e/libi40e_avx2_lib.a
00:04:10.930  [604/797] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o
00:04:10.930  [605/797] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o
00:04:10.930  [606/797] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o
00:04:10.930  [607/797] Linking static target drivers/net/i40e/base/libi40e_base.a
00:04:10.930  [608/797] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_mb.c.o
00:04:10.930  [609/797] Linking static target drivers/libtmp_rte_crypto_ipsec_mb.a
00:04:10.930  [610/797] Generating drivers/rte_common_mlx5.pmd.c with a custom command
00:04:10.930  [611/797] Compiling C object drivers/librte_common_mlx5.a.p/meson-generated_.._rte_common_mlx5.pmd.c.o
00:04:10.930  [612/797] Compiling C object drivers/librte_common_mlx5.so.24.0.p/meson-generated_.._rte_common_mlx5.pmd.c.o
00:04:10.930  [613/797] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o
00:04:10.930  [614/797] Linking static target drivers/librte_common_mlx5.a
00:04:10.930  [615/797] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:04:10.930  [616/797] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output)
00:04:10.930  [617/797] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:04:10.930  [618/797] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o
00:04:10.930  [619/797] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o
00:04:10.930  [620/797] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o
00:04:10.930  [621/797] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o
00:04:11.189  [622/797] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen3.c.o
00:04:11.189  [623/797] Generating drivers/rte_crypto_ipsec_mb.pmd.c with a custom command
00:04:11.189  [624/797] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o
00:04:11.189  [625/797] Compiling C object drivers/librte_crypto_ipsec_mb.so.24.0.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o
00:04:11.189  [626/797] Compiling C object drivers/librte_crypto_ipsec_mb.a.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o
00:04:11.189  [627/797] Linking static target drivers/librte_crypto_ipsec_mb.a
00:04:11.189  [628/797] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o
00:04:11.189  [629/797] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o
00:04:11.189  [630/797] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o
00:04:11.189  [631/797] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:04:11.189  [632/797] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:04:11.189  [633/797] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o
00:04:11.189  [634/797] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o
00:04:11.189  [635/797] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:04:11.447  [636/797] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o
00:04:11.447  [637/797] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:04:11.447  [638/797] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o
00:04:11.447  [639/797] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o
00:04:11.447  [640/797] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:04:11.447  [641/797] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o
00:04:11.447  [642/797] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:04:11.447  [643/797] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o
00:04:11.706  [644/797] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:04:11.706  [645/797] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:04:11.706  [646/797] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_sym_pmd_gen1.c.o
00:04:11.964  [647/797] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:04:11.964  [648/797] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:04:11.964  [649/797] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:04:12.223  [650/797] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o
00:04:12.223  [651/797] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:04:12.482  [652/797] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:04:12.482  [653/797] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o
00:04:12.482  [654/797] Linking target lib/librte_eal.so.24.0
00:04:12.482  [655/797] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols
00:04:12.741  [656/797] Linking target lib/librte_ring.so.24.0
00:04:12.741  [657/797] Linking target lib/librte_timer.so.24.0
00:04:12.741  [658/797] Linking target lib/librte_meter.so.24.0
00:04:12.741  [659/797] Linking target lib/librte_stack.so.24.0
00:04:12.741  [660/797] Linking target lib/librte_cfgfile.so.24.0
00:04:12.741  [661/797] Linking target lib/librte_pci.so.24.0
00:04:12.741  [662/797] Linking target lib/librte_dmadev.so.24.0
00:04:12.741  [663/797] Linking target lib/librte_jobstats.so.24.0
00:04:12.741  [664/797] Linking target lib/librte_rawdev.so.24.0
00:04:12.741  [665/797] Linking target drivers/librte_bus_auxiliary.so.24.0
00:04:12.741  [666/797] Linking target drivers/librte_bus_vdev.so.24.0
00:04:12.741  [667/797] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:04:12.741  [668/797] Linking target lib/librte_acl.so.24.0
00:04:12.741  [669/797] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:04:12.741  [670/797] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols
00:04:12.741  [671/797] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols
00:04:12.741  [672/797] Generating symbol file drivers/librte_bus_auxiliary.so.24.0.p/librte_bus_auxiliary.so.24.0.symbols
00:04:12.741  [673/797] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols
00:04:12.741  [674/797] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols
00:04:12.741  [675/797] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols
00:04:12.741  [676/797] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols
00:04:12.741  [677/797] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols
00:04:12.741  [678/797] Linking target lib/librte_rcu.so.24.0
00:04:12.741  [679/797] Linking target lib/librte_mempool.so.24.0
00:04:12.741  [680/797] Linking target drivers/librte_bus_pci.so.24.0
00:04:12.999  [681/797] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols
00:04:12.999  [682/797] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols
00:04:12.999  [683/797] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols
00:04:12.999  [684/797] Linking target drivers/librte_mempool_ring.so.24.0
00:04:12.999  [685/797] Linking target lib/librte_rib.so.24.0
00:04:12.999  [686/797] Linking target lib/librte_mbuf.so.24.0
00:04:12.999  [687/797] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o
00:04:12.999  [688/797] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols
00:04:12.999  [689/797] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols
00:04:12.999  [690/797] Linking target lib/librte_distributor.so.24.0
00:04:12.999  [691/797] Linking target lib/librte_bbdev.so.24.0
00:04:12.999  [692/797] Linking target lib/librte_compressdev.so.24.0
00:04:12.999  [693/797] Linking target lib/librte_net.so.24.0
00:04:12.999  [694/797] Linking target lib/librte_gpudev.so.24.0
00:04:12.999  [695/797] Linking target lib/librte_reorder.so.24.0
00:04:12.999  [696/797] Linking target lib/librte_regexdev.so.24.0
00:04:12.999  [697/797] Linking target lib/librte_sched.so.24.0
00:04:12.999  [698/797] Linking target lib/librte_fib.so.24.0
00:04:12.999  [699/797] Linking target lib/librte_mldev.so.24.0
00:04:12.999  [700/797] Linking target lib/librte_cryptodev.so.24.0
00:04:13.258  [701/797] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols
00:04:13.258  [702/797] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols
00:04:13.258  [703/797] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols
00:04:13.258  [704/797] Generating symbol file lib/librte_compressdev.so.24.0.p/librte_compressdev.so.24.0.symbols
00:04:13.258  [705/797] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols
00:04:13.258  [706/797] Linking target lib/librte_security.so.24.0
00:04:13.258  [707/797] Linking target lib/librte_hash.so.24.0
00:04:13.258  [708/797] Linking target lib/librte_cmdline.so.24.0
00:04:13.258  [709/797] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:04:13.517  [710/797] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols
00:04:13.517  [711/797] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols
00:04:13.517  [712/797] Generating drivers/rte_common_mlx5.sym_chk with a custom command (wrapped by meson to capture output)
00:04:13.517  [713/797] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:04:13.517  [714/797] Linking target lib/librte_efd.so.24.0
00:04:13.517  [715/797] Linking target lib/librte_lpm.so.24.0
00:04:13.517  [716/797] Linking target lib/librte_pdcp.so.24.0
00:04:13.517  [717/797] Linking target lib/librte_member.so.24.0
00:04:13.517  [718/797] Linking target lib/librte_ipsec.so.24.0
00:04:13.517  [719/797] Linking target drivers/librte_crypto_ipsec_mb.so.24.0
00:04:13.517  [720/797] Linking target drivers/librte_common_mlx5.so.24.0
00:04:13.517  [721/797] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o
00:04:13.517  [722/797] Linking static target drivers/libtmp_rte_net_i40e.a
00:04:13.517  [723/797] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols
00:04:13.517  [724/797] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols
00:04:13.517  [725/797] Generating symbol file drivers/librte_common_mlx5.so.24.0.p/librte_common_mlx5.so.24.0.symbols
00:04:13.517  [726/797] Linking target drivers/librte_crypto_mlx5.so.24.0
00:04:13.775  [727/797] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:04:13.775  [728/797] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:04:13.775  [729/797] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:04:14.034  [730/797] Linking static target drivers/librte_net_i40e.a
00:04:14.034  [731/797] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:04:14.034  [732/797] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:04:14.034  [733/797] Linking target lib/librte_ethdev.so.24.0
00:04:14.293  [734/797] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols
00:04:14.293  [735/797] Linking target lib/librte_pcapng.so.24.0
00:04:14.293  [736/797] Linking target lib/librte_ip_frag.so.24.0
00:04:14.293  [737/797] Linking target lib/librte_gro.so.24.0
00:04:14.293  [738/797] Linking target lib/librte_gso.so.24.0
00:04:14.293  [739/797] Linking target lib/librte_bpf.so.24.0
00:04:14.293  [740/797] Linking target lib/librte_power.so.24.0
00:04:14.293  [741/797] Linking target lib/librte_metrics.so.24.0
00:04:14.293  [742/797] Linking target lib/librte_eventdev.so.24.0
00:04:14.293  [743/797] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols
00:04:14.293  [744/797] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols
00:04:14.293  [745/797] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols
00:04:14.293  [746/797] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols
00:04:14.293  [747/797] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols
00:04:14.293  [748/797] Linking target lib/librte_graph.so.24.0
00:04:14.293  [749/797] Linking target lib/librte_latencystats.so.24.0
00:04:14.293  [750/797] Linking target lib/librte_bitratestats.so.24.0
00:04:14.293  [751/797] Linking target lib/librte_pdump.so.24.0
00:04:14.293  [752/797] Linking target lib/librte_dispatcher.so.24.0
00:04:14.293  [753/797] Linking target lib/librte_port.so.24.0
00:04:14.552  [754/797] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:04:14.552  [755/797] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols
00:04:14.552  [756/797] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols
00:04:14.552  [757/797] Linking target drivers/librte_net_i40e.so.24.0
00:04:14.552  [758/797] Linking target lib/librte_node.so.24.0
00:04:14.552  [759/797] Linking target lib/librte_table.so.24.0
00:04:14.810  [760/797] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols
00:04:14.810  [761/797] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:04:15.069  [762/797] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_asym.c.o
00:04:15.069  [763/797] Linking static target drivers/libtmp_rte_common_qat.a
00:04:15.327  [764/797] Generating drivers/rte_common_qat.pmd.c with a custom command
00:04:15.327  [765/797] Compiling C object drivers/librte_common_qat.a.p/meson-generated_.._rte_common_qat.pmd.c.o
00:04:15.327  [766/797] Compiling C object drivers/librte_common_qat.so.24.0.p/meson-generated_.._rte_common_qat.pmd.c.o
00:04:15.327  [767/797] Linking static target drivers/librte_common_qat.a
00:04:15.327  [768/797] Linking target drivers/librte_common_qat.so.24.0
00:04:16.266  [769/797] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:04:21.540  [770/797] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:04:21.540  [771/797] Linking static target lib/librte_pipeline.a
00:04:22.476  [772/797] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:04:22.476  [773/797] Linking static target lib/librte_vhost.a
00:04:23.043  [774/797] Linking target app/dpdk-dumpcap
00:04:23.043  [775/797] Linking target app/dpdk-test-sad
00:04:23.043  [776/797] Linking target app/dpdk-pdump
00:04:23.043  [777/797] Linking target app/dpdk-proc-info
00:04:23.043  [778/797] Linking target app/dpdk-test-dma-perf
00:04:23.043  [779/797] Linking target app/dpdk-test-acl
00:04:23.043  [780/797] Linking target app/dpdk-test-flow-perf
00:04:23.043  [781/797] Linking target app/dpdk-test-gpudev
00:04:23.043  [782/797] Linking target app/dpdk-test-fib
00:04:23.043  [783/797] Linking target app/dpdk-test-cmdline
00:04:23.043  [784/797] Linking target app/dpdk-test-regex
00:04:23.044  [785/797] Linking target app/dpdk-test-mldev
00:04:23.044  [786/797] Linking target app/dpdk-test-crypto-perf
00:04:23.044  [787/797] Linking target app/dpdk-graph
00:04:23.044  [788/797] Linking target app/dpdk-test-pipeline
00:04:23.044  [789/797] Linking target app/dpdk-test-security-perf
00:04:23.044  [790/797] Linking target app/dpdk-test-bbdev
00:04:23.044  [791/797] Linking target app/dpdk-test-compress-perf
00:04:23.044  [792/797] Linking target app/dpdk-test-eventdev
00:04:23.044  [793/797] Linking target app/dpdk-testpmd
00:04:23.302  [794/797] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:04:23.561  [795/797] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:04:23.561  [796/797] Linking target lib/librte_pipeline.so.24.0
00:04:23.561  [797/797] Linking target lib/librte_vhost.so.24.0
00:04:23.561    18:25:09 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s
00:04:23.561   18:25:09 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:04:23.561   18:25:09 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build-tmp -j88 install
00:04:23.561  ninja: Entering directory `/var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build-tmp'
00:04:23.561  [0/1] Installing files.
00:04:23.826  Installing subdir /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples
00:04:23.826  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:04:23.826  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:04:23.826  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ethtool
00:04:23.826  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:04:23.826  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:04:23.826  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:04:23.826  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:04:23.826  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:04:23.826  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:04:23.826  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:04:23.826  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:23.826  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:23.826  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:04:23.827  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/service_cores
00:04:23.828  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/service_cores
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/dma
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/dma
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/common
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/common/sse
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/common/neon
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:23.829  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/multi_process
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:04:23.830  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/helloworld
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/helloworld
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:04:23.831  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/distributor
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/distributor
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:23.832  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:23.833  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:23.833  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:04:23.833  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:04:23.833  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:04:23.833  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:04:23.833  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:04:23.833  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:04:23.833  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:04:23.833  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:04:23.833  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:04:23.833  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:04:23.833  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:04:23.833  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:04:23.833  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:04:23.833  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:04:23.833  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:04:23.833  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:04:23.833  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:04:23.833  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:04:23.833  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:04:23.833  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:04:23.833  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:04:23.833  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:04:23.833  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:04:24.094  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:04:24.094  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb
00:04:24.094  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb
00:04:24.094  Installing lib/librte_log.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_kvargs.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_telemetry.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_eal.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_ring.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_rcu.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_mempool.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_mbuf.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_net.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_meter.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_ethdev.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_pci.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_cmdline.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_metrics.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_hash.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_hash.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_timer.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_acl.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_bbdev.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_bitratestats.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_bpf.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_cfgfile.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_compressdev.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_cryptodev.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_distributor.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_dmadev.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_efd.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_eventdev.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_dispatcher.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_gpudev.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_gro.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_gso.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_gso.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_ip_frag.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_jobstats.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_latencystats.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_lpm.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_member.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_pcapng.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.094  Installing lib/librte_power.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.095  Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.095  Installing lib/librte_rawdev.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.095  Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing lib/librte_regexdev.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing lib/librte_mldev.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing lib/librte_rib.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing lib/librte_reorder.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing lib/librte_sched.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing lib/librte_security.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing lib/librte_stack.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing lib/librte_vhost.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing lib/librte_vhost.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing lib/librte_ipsec.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing lib/librte_pdcp.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing lib/librte_fib.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing lib/librte_port.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing lib/librte_pdump.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing lib/librte_table.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing lib/librte_pipeline.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing lib/librte_graph.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing lib/librte_node.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing drivers/librte_bus_auxiliary.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing drivers/librte_bus_auxiliary.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:04:24.359  Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:04:24.359  Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:04:24.359  Installing drivers/librte_common_mlx5.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing drivers/librte_common_mlx5.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:04:24.359  Installing drivers/librte_common_qat.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing drivers/librte_common_qat.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:04:24.359  Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:04:24.359  Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:04:24.359  Installing drivers/librte_crypto_ipsec_mb.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing drivers/librte_crypto_ipsec_mb.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:04:24.359  Installing drivers/librte_crypto_mlx5.a to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.359  Installing drivers/librte_crypto_mlx5.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:04:24.359  Installing app/dpdk-dumpcap to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/bin
00:04:24.359  Installing app/dpdk-graph to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/bin
00:04:24.359  Installing app/dpdk-pdump to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/bin
00:04:24.359  Installing app/dpdk-proc-info to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/bin
00:04:24.359  Installing app/dpdk-test-acl to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/bin
00:04:24.359  Installing app/dpdk-test-bbdev to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/bin
00:04:24.359  Installing app/dpdk-test-cmdline to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/bin
00:04:24.360  Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/bin
00:04:24.360  Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/bin
00:04:24.360  Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/bin
00:04:24.360  Installing app/dpdk-test-eventdev to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/bin
00:04:24.360  Installing app/dpdk-test-fib to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/bin
00:04:24.360  Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/bin
00:04:24.360  Installing app/dpdk-test-gpudev to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/bin
00:04:24.360  Installing app/dpdk-test-mldev to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/bin
00:04:24.360  Installing app/dpdk-test-pipeline to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/bin
00:04:24.360  Installing app/dpdk-testpmd to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/bin
00:04:24.360  Installing app/dpdk-test-regex to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/bin
00:04:24.360  Installing app/dpdk-test-sad to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/bin
00:04:24.360  Installing app/dpdk-test-security-perf to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/bin
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include/generic
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include/generic
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include/generic
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include/generic
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include/generic
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include/generic
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include/generic
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include/generic
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include/generic
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include/generic
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include/generic
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include/generic
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.360  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.361  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.362  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.363  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.364  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.364  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.364  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.364  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.364  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/bin
00:04:24.364  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/bin
00:04:24.364  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/bin
00:04:24.364  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/bin
00:04:24.364  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/bin
00:04:24.364  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/bin
00:04:24.364  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.364  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/pkgconfig
00:04:24.364  Installing /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/pkgconfig
00:04:24.364  Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_log.so.24
00:04:24.364  Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_log.so
00:04:24.364  Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_kvargs.so.24
00:04:24.364  Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_kvargs.so
00:04:24.364  Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_telemetry.so.24
00:04:24.364  Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_telemetry.so
00:04:24.364  Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_eal.so.24
00:04:24.364  Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_eal.so
00:04:24.364  Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_ring.so.24
00:04:24.364  Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_ring.so
00:04:24.364  Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_rcu.so.24
00:04:24.364  Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_rcu.so
00:04:24.364  Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_mempool.so.24
00:04:24.364  Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_mempool.so
00:04:24.364  Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_mbuf.so.24
00:04:24.364  Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_mbuf.so
00:04:24.364  Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_net.so.24
00:04:24.364  Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_net.so
00:04:24.364  Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_meter.so.24
00:04:24.364  Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_meter.so
00:04:24.364  Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_ethdev.so.24
00:04:24.364  Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_ethdev.so
00:04:24.364  Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_pci.so.24
00:04:24.364  Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_pci.so
00:04:24.364  Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_cmdline.so.24
00:04:24.364  Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_cmdline.so
00:04:24.364  Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_metrics.so.24
00:04:24.364  Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_metrics.so
00:04:24.364  Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_hash.so.24
00:04:24.364  Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_hash.so
00:04:24.364  Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_timer.so.24
00:04:24.364  Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_timer.so
00:04:24.364  Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_acl.so.24
00:04:24.364  Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_acl.so
00:04:24.364  Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_bbdev.so.24
00:04:24.364  Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_bbdev.so
00:04:24.364  Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24
00:04:24.364  Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_bitratestats.so
00:04:24.364  Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_bpf.so.24
00:04:24.364  Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_bpf.so
00:04:24.364  Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24
00:04:24.364  Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_cfgfile.so
00:04:24.364  Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_compressdev.so.24
00:04:24.364  Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_compressdev.so
00:04:24.364  Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24
00:04:24.364  Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_cryptodev.so
00:04:24.364  Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_distributor.so.24
00:04:24.364  Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_distributor.so
00:04:24.364  Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_dmadev.so.24
00:04:24.364  Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_dmadev.so
00:04:24.364  Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_efd.so.24
00:04:24.364  Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_efd.so
00:04:24.364  Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_eventdev.so.24
00:04:24.364  Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_eventdev.so
00:04:24.364  Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24
00:04:24.364  Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_dispatcher.so
00:04:24.364  Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_gpudev.so.24
00:04:24.364  Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_gpudev.so
00:04:24.364  Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_gro.so.24
00:04:24.364  Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_gro.so
00:04:24.364  Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_gso.so.24
00:04:24.364  Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_gso.so
00:04:24.364  Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24
00:04:24.364  Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_ip_frag.so
00:04:24.364  Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_jobstats.so.24
00:04:24.364  Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_jobstats.so
00:04:24.364  Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_latencystats.so.24
00:04:24.364  Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_latencystats.so
00:04:24.364  Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_lpm.so.24
00:04:24.364  Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_lpm.so
00:04:24.364  Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_member.so.24
00:04:24.364  Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_member.so
00:04:24.364  Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_pcapng.so.24
00:04:24.364  Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_pcapng.so
00:04:24.365  Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_power.so.24
00:04:24.365  Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_power.so
00:04:24.365  Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_rawdev.so.24
00:04:24.365  Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_rawdev.so
00:04:24.365  Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_regexdev.so.24
00:04:24.365  Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_regexdev.so
00:04:24.365  Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_mldev.so.24
00:04:24.365  Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_mldev.so
00:04:24.365  Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_rib.so.24
00:04:24.365  Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_rib.so
00:04:24.365  Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_reorder.so.24
00:04:24.365  Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_reorder.so
00:04:24.365  Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_sched.so.24
00:04:24.365  Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_sched.so
00:04:24.365  Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_security.so.24
00:04:24.365  Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_security.so
00:04:24.365  Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_stack.so.24
00:04:24.365  Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_stack.so
00:04:24.365  Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_vhost.so.24
00:04:24.365  Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_vhost.so
00:04:24.365  Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_ipsec.so.24
00:04:24.365  Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_ipsec.so
00:04:24.365  Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_pdcp.so.24
00:04:24.365  Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_pdcp.so
00:04:24.365  Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_fib.so.24
00:04:24.365  Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_fib.so
00:04:24.365  Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_port.so.24
00:04:24.365  Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_port.so
00:04:24.365  Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_pdump.so.24
00:04:24.365  Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_pdump.so
00:04:24.365  Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_table.so.24
00:04:24.365  Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_table.so
00:04:24.365  Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_pipeline.so.24
00:04:24.365  Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_pipeline.so
00:04:24.365  Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_graph.so.24
00:04:24.365  Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_graph.so
00:04:24.365  Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_node.so.24
00:04:24.365  Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/librte_node.so
00:04:24.365  Installing symlink pointing to librte_bus_auxiliary.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_auxiliary.so.24
00:04:24.365  Installing symlink pointing to librte_bus_auxiliary.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_auxiliary.so
00:04:24.365  Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24
00:04:24.365  Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so
00:04:24.365  './librte_bus_auxiliary.so' -> 'dpdk/pmds-24.0/librte_bus_auxiliary.so'
00:04:24.365  './librte_bus_auxiliary.so.24' -> 'dpdk/pmds-24.0/librte_bus_auxiliary.so.24'
00:04:24.365  './librte_bus_auxiliary.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_auxiliary.so.24.0'
00:04:24.365  './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so'
00:04:24.365  './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24'
00:04:24.365  './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0'
00:04:24.365  './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so'
00:04:24.365  './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24'
00:04:24.365  './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0'
00:04:24.365  './librte_common_mlx5.so' -> 'dpdk/pmds-24.0/librte_common_mlx5.so'
00:04:24.365  './librte_common_mlx5.so.24' -> 'dpdk/pmds-24.0/librte_common_mlx5.so.24'
00:04:24.365  './librte_common_mlx5.so.24.0' -> 'dpdk/pmds-24.0/librte_common_mlx5.so.24.0'
00:04:24.365  './librte_common_qat.so' -> 'dpdk/pmds-24.0/librte_common_qat.so'
00:04:24.365  './librte_common_qat.so.24' -> 'dpdk/pmds-24.0/librte_common_qat.so.24'
00:04:24.365  './librte_common_qat.so.24.0' -> 'dpdk/pmds-24.0/librte_common_qat.so.24.0'
00:04:24.365  './librte_crypto_ipsec_mb.so' -> 'dpdk/pmds-24.0/librte_crypto_ipsec_mb.so'
00:04:24.365  './librte_crypto_ipsec_mb.so.24' -> 'dpdk/pmds-24.0/librte_crypto_ipsec_mb.so.24'
00:04:24.365  './librte_crypto_ipsec_mb.so.24.0' -> 'dpdk/pmds-24.0/librte_crypto_ipsec_mb.so.24.0'
00:04:24.365  './librte_crypto_mlx5.so' -> 'dpdk/pmds-24.0/librte_crypto_mlx5.so'
00:04:24.365  './librte_crypto_mlx5.so.24' -> 'dpdk/pmds-24.0/librte_crypto_mlx5.so.24'
00:04:24.365  './librte_crypto_mlx5.so.24.0' -> 'dpdk/pmds-24.0/librte_crypto_mlx5.so.24.0'
00:04:24.365  './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so'
00:04:24.365  './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24'
00:04:24.365  './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0'
00:04:24.365  './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so'
00:04:24.365  './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24'
00:04:24.365  './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0'
00:04:24.365  Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24
00:04:24.365  Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so
00:04:24.365  Installing symlink pointing to librte_common_mlx5.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_common_mlx5.so.24
00:04:24.365  Installing symlink pointing to librte_common_mlx5.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_common_mlx5.so
00:04:24.365  Installing symlink pointing to librte_common_qat.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_common_qat.so.24
00:04:24.365  Installing symlink pointing to librte_common_qat.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_common_qat.so
00:04:24.365  Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24
00:04:24.365  Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so
00:04:24.365  Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24
00:04:24.365  Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so
00:04:24.365  Installing symlink pointing to librte_crypto_ipsec_mb.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_crypto_ipsec_mb.so.24
00:04:24.365  Installing symlink pointing to librte_crypto_ipsec_mb.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_crypto_ipsec_mb.so
00:04:24.365  Installing symlink pointing to librte_crypto_mlx5.so.24.0 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_crypto_mlx5.so.24
00:04:24.365  Installing symlink pointing to librte_crypto_mlx5.so.24 to /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_crypto_mlx5.so
00:04:24.365  Running custom install script '/bin/sh /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0'
00:04:24.625   18:25:10 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat
00:04:24.625   18:25:10 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:04:24.625  
00:04:24.625  real	2m1.274s
00:04:24.625  user	20m29.881s
00:04:24.625  sys	2m28.106s
00:04:24.625   18:25:10 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:04:24.625   18:25:10 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x
00:04:24.625  ************************************
00:04:24.625  END TEST build_native_dpdk
00:04:24.625  ************************************
00:04:24.625   18:25:10  -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:04:24.625   18:25:10  -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:04:24.625   18:25:10  -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:04:24.625   18:25:10  -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:04:24.625   18:25:10  -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:04:24.625   18:25:10  -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:04:24.625   18:25:10  -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:04:24.625   18:25:10  -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/vfio-user-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build --with-sma --with-crypto --with-shared
00:04:24.625  Using /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/pkgconfig for additional libs...
00:04:24.625  DPDK libraries: /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib
00:04:24.625  DPDK includes: //var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/include
00:04:24.625  Using default SPDK env in /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk
00:04:24.884  Using 'verbs' RDMA provider
00:04:33.575  Configuring ISA-L (logfile: /var/jenkins/workspace/vfio-user-phy-autotest/spdk/.spdk-isal.log)...done.
00:04:41.703  Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/vfio-user-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:04:41.703  Creating mk/config.mk...done.
00:04:41.703  Creating mk/cc.flags.mk...done.
00:04:41.703  Type 'make' to build.
00:04:41.703   18:25:27  -- spdk/autobuild.sh@70 -- $ run_test make make -j88
00:04:41.703   18:25:27  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:04:41.703   18:25:27  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:04:41.703   18:25:27  -- common/autotest_common.sh@10 -- $ set +x
00:04:41.703  ************************************
00:04:41.703  START TEST make
00:04:41.703  ************************************
00:04:41.703   18:25:27 make -- common/autotest_common.sh@1129 -- $ make -j88
00:04:41.703  make[1]: Nothing to be done for 'all'.
00:04:42.645  The Meson build system
00:04:42.645  Version: 1.5.0
00:04:42.645  Source dir: /var/jenkins/workspace/vfio-user-phy-autotest/spdk/libvfio-user
00:04:42.645  Build dir: /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/libvfio-user/build-debug
00:04:42.646  Build type: native build
00:04:42.646  Project name: libvfio-user
00:04:42.646  Project version: 0.0.1
00:04:42.646  C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:04:42.646  C linker for the host machine: gcc ld.bfd 2.40-14
00:04:42.646  Host machine cpu family: x86_64
00:04:42.646  Host machine cpu: x86_64
00:04:42.646  Run-time dependency threads found: YES
00:04:42.646  Library dl found: YES
00:04:42.646  Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:04:42.646  Run-time dependency json-c found: YES 0.17
00:04:42.646  Run-time dependency cmocka found: YES 1.1.7
00:04:42.646  Program pytest-3 found: NO
00:04:42.646  Program flake8 found: NO
00:04:42.646  Program misspell-fixer found: NO
00:04:42.646  Program restructuredtext-lint found: NO
00:04:42.646  Program valgrind found: YES (/usr/bin/valgrind)
00:04:42.646  Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:04:42.646  Compiler for C supports arguments -Wmissing-declarations: YES 
00:04:42.646  Compiler for C supports arguments -Wwrite-strings: YES 
00:04:42.646  ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:04:42.646  Program test-lspci.sh found: YES (/var/jenkins/workspace/vfio-user-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:04:42.646  Program test-linkage.sh found: YES (/var/jenkins/workspace/vfio-user-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:04:42.646  ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:04:42.646  Build targets in project: 8
00:04:42.646  WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:04:42.646   * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:04:42.646  
00:04:42.646  libvfio-user 0.0.1
00:04:42.646  
00:04:42.646    User defined options
00:04:42.646      buildtype      : debug
00:04:42.646      default_library: shared
00:04:42.646      libdir         : /usr/local/lib
00:04:42.646  
00:04:42.646  Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:04:43.597  ninja: Entering directory `/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/libvfio-user/build-debug'
00:04:43.858  [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:04:43.858  [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:04:43.858  [3/37] Compiling C object samples/null.p/null.c.o
00:04:43.858  [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:04:43.858  [5/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:04:43.858  [6/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:04:43.858  [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:04:43.858  [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:04:43.858  [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:04:43.858  [10/37] Compiling C object samples/lspci.p/lspci.c.o
00:04:43.858  [11/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:04:43.858  [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:04:43.858  [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:04:43.858  [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:04:43.858  [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:04:43.858  [16/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:04:43.858  [17/37] Compiling C object test/unit_tests.p/mocks.c.o
00:04:43.858  [18/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:04:43.858  [19/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:04:43.858  [20/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:04:43.858  [21/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:04:43.858  [22/37] Compiling C object samples/server.p/server.c.o
00:04:43.858  [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:04:43.858  [24/37] Compiling C object samples/client.p/client.c.o
00:04:43.858  [25/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:04:43.858  [26/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:04:43.858  [27/37] Linking target samples/client
00:04:44.119  [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:04:44.119  [29/37] Linking target lib/libvfio-user.so.0.0.1
00:04:44.119  [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:04:44.119  [31/37] Linking target test/unit_tests
00:04:44.383  [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:04:44.383  [33/37] Linking target samples/null
00:04:44.383  [34/37] Linking target samples/lspci
00:04:44.383  [35/37] Linking target samples/server
00:04:44.383  [36/37] Linking target samples/gpio-pci-idio-16
00:04:44.383  [37/37] Linking target samples/shadow_ioeventfd_server
00:04:44.383  INFO: autodetecting backend as ninja
00:04:44.383  INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/libvfio-user/build-debug
00:04:44.383  DESTDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/libvfio-user/build-debug
00:04:45.327  ninja: Entering directory `/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/libvfio-user/build-debug'
00:04:45.327  ninja: no work to do.
00:05:24.049    CC lib/ut_mock/mock.o
00:05:24.049    CC lib/log/log.o
00:05:24.049    CC lib/log/log_flags.o
00:05:24.049    CC lib/log/log_deprecated.o
00:05:24.049    CC lib/ut/ut.o
00:05:24.049    LIB libspdk_ut_mock.a
00:05:24.049    LIB libspdk_ut.a
00:05:24.049    LIB libspdk_log.a
00:05:24.049    SO libspdk_ut_mock.so.6.0
00:05:24.049    SO libspdk_ut.so.2.0
00:05:24.049    SO libspdk_log.so.7.1
00:05:24.049    SYMLINK libspdk_ut_mock.so
00:05:24.049    SYMLINK libspdk_ut.so
00:05:24.049    SYMLINK libspdk_log.so
00:05:24.049    CC lib/dma/dma.o
00:05:24.049    CC lib/ioat/ioat.o
00:05:24.049    CXX lib/trace_parser/trace.o
00:05:24.049    CC lib/util/base64.o
00:05:24.049    CC lib/util/bit_array.o
00:05:24.049    CC lib/util/cpuset.o
00:05:24.049    CC lib/util/crc16.o
00:05:24.049    CC lib/util/crc32c.o
00:05:24.049    CC lib/util/crc32.o
00:05:24.049    CC lib/util/crc32_ieee.o
00:05:24.049    CC lib/util/crc64.o
00:05:24.049    CC lib/util/dif.o
00:05:24.049    CC lib/util/fd.o
00:05:24.049    CC lib/util/fd_group.o
00:05:24.049    CC lib/util/hexlify.o
00:05:24.049    CC lib/util/file.o
00:05:24.049    CC lib/util/iov.o
00:05:24.049    CC lib/util/math.o
00:05:24.049    CC lib/util/net.o
00:05:24.049    CC lib/util/pipe.o
00:05:24.049    CC lib/util/strerror_tls.o
00:05:24.049    CC lib/util/string.o
00:05:24.049    CC lib/util/uuid.o
00:05:24.049    CC lib/util/xor.o
00:05:24.049    CC lib/util/zipf.o
00:05:24.049    CC lib/util/md5.o
00:05:24.049    CC lib/vfio_user/host/vfio_user_pci.o
00:05:24.049    CC lib/vfio_user/host/vfio_user.o
00:05:24.049    LIB libspdk_dma.a
00:05:24.049    SO libspdk_dma.so.5.0
00:05:24.049    SYMLINK libspdk_dma.so
00:05:24.049    LIB libspdk_ioat.a
00:05:24.049    SO libspdk_ioat.so.7.0
00:05:24.049    LIB libspdk_vfio_user.a
00:05:24.049    SYMLINK libspdk_ioat.so
00:05:24.049    SO libspdk_vfio_user.so.5.0
00:05:24.049    SYMLINK libspdk_vfio_user.so
00:05:24.049    LIB libspdk_util.a
00:05:24.049    SO libspdk_util.so.10.1
00:05:24.049    SYMLINK libspdk_util.so
00:05:24.049    LIB libspdk_trace_parser.a
00:05:24.049    SO libspdk_trace_parser.so.6.0
00:05:24.049    CC lib/idxd/idxd.o
00:05:24.049    CC lib/conf/conf.o
00:05:24.049    CC lib/vmd/vmd.o
00:05:24.049    CC lib/idxd/idxd_user.o
00:05:24.049    CC lib/rdma_utils/rdma_utils.o
00:05:24.049    CC lib/vmd/led.o
00:05:24.049    CC lib/idxd/idxd_kernel.o
00:05:24.049    CC lib/env_dpdk/memory.o
00:05:24.049    CC lib/env_dpdk/pci.o
00:05:24.049    CC lib/env_dpdk/init.o
00:05:24.049    CC lib/env_dpdk/env.o
00:05:24.049    CC lib/env_dpdk/threads.o
00:05:24.049    CC lib/env_dpdk/pci_ioat.o
00:05:24.049    CC lib/env_dpdk/pci_virtio.o
00:05:24.049    CC lib/env_dpdk/pci_vmd.o
00:05:24.049    CC lib/env_dpdk/pci_idxd.o
00:05:24.049    CC lib/env_dpdk/pci_event.o
00:05:24.049    CC lib/json/json_parse.o
00:05:24.049    CC lib/env_dpdk/sigbus_handler.o
00:05:24.049    CC lib/env_dpdk/pci_dpdk.o
00:05:24.049    CC lib/json/json_util.o
00:05:24.049    CC lib/env_dpdk/pci_dpdk_2207.o
00:05:24.049    CC lib/env_dpdk/pci_dpdk_2211.o
00:05:24.049    CC lib/json/json_write.o
00:05:24.049    SYMLINK libspdk_trace_parser.so
00:05:24.049    LIB libspdk_conf.a
00:05:24.049    SO libspdk_conf.so.6.0
00:05:24.049    LIB libspdk_rdma_utils.a
00:05:24.049    SO libspdk_rdma_utils.so.1.0
00:05:24.049    SYMLINK libspdk_conf.so
00:05:24.049    LIB libspdk_json.a
00:05:24.049    SO libspdk_json.so.6.0
00:05:24.049    SYMLINK libspdk_rdma_utils.so
00:05:24.049    SYMLINK libspdk_json.so
00:05:24.049    CC lib/rdma_provider/common.o
00:05:24.049    CC lib/rdma_provider/rdma_provider_verbs.o
00:05:24.049    CC lib/jsonrpc/jsonrpc_server.o
00:05:24.049    CC lib/jsonrpc/jsonrpc_server_tcp.o
00:05:24.049    CC lib/jsonrpc/jsonrpc_client_tcp.o
00:05:24.049    CC lib/jsonrpc/jsonrpc_client.o
00:05:24.049    LIB libspdk_idxd.a
00:05:24.049    LIB libspdk_rdma_provider.a
00:05:24.049    SO libspdk_idxd.so.12.1
00:05:24.049    LIB libspdk_vmd.a
00:05:24.049    SO libspdk_rdma_provider.so.7.0
00:05:24.049    SO libspdk_vmd.so.6.0
00:05:24.049    SYMLINK libspdk_idxd.so
00:05:24.049    SYMLINK libspdk_rdma_provider.so
00:05:24.049    SYMLINK libspdk_vmd.so
00:05:24.049    LIB libspdk_jsonrpc.a
00:05:24.049    SO libspdk_jsonrpc.so.6.0
00:05:24.049    SYMLINK libspdk_jsonrpc.so
00:05:24.049    CC lib/rpc/rpc.o
00:05:24.049    LIB libspdk_rpc.a
00:05:24.049    SO libspdk_rpc.so.6.0
00:05:24.049    SYMLINK libspdk_rpc.so
00:05:24.049    CC lib/keyring/keyring.o
00:05:24.049    CC lib/keyring/keyring_rpc.o
00:05:24.049    CC lib/notify/notify.o
00:05:24.049    CC lib/notify/notify_rpc.o
00:05:24.049    CC lib/trace/trace.o
00:05:24.049    CC lib/trace/trace_flags.o
00:05:24.049    CC lib/trace/trace_rpc.o
00:05:24.049    LIB libspdk_env_dpdk.a
00:05:24.049    SO libspdk_env_dpdk.so.15.1
00:05:24.049    LIB libspdk_notify.a
00:05:24.049    SO libspdk_notify.so.6.0
00:05:24.049    SYMLINK libspdk_notify.so
00:05:24.049    LIB libspdk_keyring.a
00:05:24.049    SYMLINK libspdk_env_dpdk.so
00:05:24.049    SO libspdk_keyring.so.2.0
00:05:24.049    LIB libspdk_trace.a
00:05:24.049    SO libspdk_trace.so.11.0
00:05:24.049    SYMLINK libspdk_keyring.so
00:05:24.049    SYMLINK libspdk_trace.so
00:05:24.049    CC lib/thread/thread.o
00:05:24.049    CC lib/thread/iobuf.o
00:05:24.049    CC lib/sock/sock.o
00:05:24.049    CC lib/sock/sock_rpc.o
00:05:24.049    LIB libspdk_sock.a
00:05:24.049    SO libspdk_sock.so.10.0
00:05:24.049    SYMLINK libspdk_sock.so
00:05:24.049    CC lib/nvme/nvme_ctrlr_cmd.o
00:05:24.049    CC lib/nvme/nvme_ctrlr.o
00:05:24.049    CC lib/nvme/nvme_fabric.o
00:05:24.049    CC lib/nvme/nvme_ns_cmd.o
00:05:24.049    CC lib/nvme/nvme_ns.o
00:05:24.049    CC lib/nvme/nvme_pcie_common.o
00:05:24.049    CC lib/nvme/nvme_pcie.o
00:05:24.049    CC lib/nvme/nvme_qpair.o
00:05:24.049    CC lib/nvme/nvme.o
00:05:24.050    CC lib/nvme/nvme_quirks.o
00:05:24.050    CC lib/nvme/nvme_transport.o
00:05:24.050    CC lib/nvme/nvme_discovery.o
00:05:24.050    CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:05:24.050    CC lib/nvme/nvme_ns_ocssd_cmd.o
00:05:24.050    CC lib/nvme/nvme_tcp.o
00:05:24.050    CC lib/nvme/nvme_opal.o
00:05:24.050    CC lib/nvme/nvme_io_msg.o
00:05:24.050    CC lib/nvme/nvme_poll_group.o
00:05:24.050    CC lib/nvme/nvme_zns.o
00:05:24.050    CC lib/nvme/nvme_stubs.o
00:05:24.050    CC lib/nvme/nvme_auth.o
00:05:24.050    CC lib/nvme/nvme_cuse.o
00:05:24.050    CC lib/nvme/nvme_vfio_user.o
00:05:24.050    CC lib/nvme/nvme_rdma.o
00:05:24.050    LIB libspdk_thread.a
00:05:24.050    SO libspdk_thread.so.11.0
00:05:24.050    SYMLINK libspdk_thread.so
00:05:24.050    CC lib/blob/blobstore.o
00:05:24.050    CC lib/blob/request.o
00:05:24.050    CC lib/blob/zeroes.o
00:05:24.050    CC lib/virtio/virtio.o
00:05:24.050    CC lib/blob/blob_bs_dev.o
00:05:24.050    CC lib/virtio/virtio_vhost_user.o
00:05:24.050    CC lib/virtio/virtio_vfio_user.o
00:05:24.050    CC lib/virtio/virtio_pci.o
00:05:24.050    CC lib/accel/accel.o
00:05:24.050    CC lib/accel/accel_rpc.o
00:05:24.050    CC lib/vfu_tgt/tgt_rpc.o
00:05:24.050    CC lib/accel/accel_sw.o
00:05:24.050    CC lib/vfu_tgt/tgt_endpoint.o
00:05:24.050    CC lib/init/json_config.o
00:05:24.050    CC lib/init/subsystem.o
00:05:24.050    CC lib/init/rpc.o
00:05:24.050    CC lib/fsdev/fsdev.o
00:05:24.050    CC lib/init/subsystem_rpc.o
00:05:24.050    CC lib/fsdev/fsdev_io.o
00:05:24.050    CC lib/fsdev/fsdev_rpc.o
00:05:24.050    LIB libspdk_init.a
00:05:24.050    SO libspdk_init.so.6.0
00:05:24.050    SYMLINK libspdk_init.so
00:05:24.050    LIB libspdk_vfu_tgt.a
00:05:24.050    LIB libspdk_virtio.a
00:05:24.050    SO libspdk_vfu_tgt.so.3.0
00:05:24.050    SO libspdk_virtio.so.7.0
00:05:24.050    SYMLINK libspdk_vfu_tgt.so
00:05:24.050    SYMLINK libspdk_virtio.so
00:05:24.050    CC lib/event/app.o
00:05:24.050    CC lib/event/reactor.o
00:05:24.050    CC lib/event/log_rpc.o
00:05:24.050    CC lib/event/scheduler_static.o
00:05:24.050    CC lib/event/app_rpc.o
00:05:24.050    LIB libspdk_fsdev.a
00:05:24.050    SO libspdk_fsdev.so.2.0
00:05:24.050    SYMLINK libspdk_fsdev.so
00:05:24.050    CC lib/fuse_dispatcher/fuse_dispatcher.o
00:05:24.050    LIB libspdk_event.a
00:05:24.050    SO libspdk_event.so.14.0
00:05:24.050    SYMLINK libspdk_event.so
00:05:24.050    LIB libspdk_nvme.a
00:05:24.050    LIB libspdk_accel.a
00:05:24.309    SO libspdk_accel.so.16.0
00:05:24.309    SO libspdk_nvme.so.15.0
00:05:24.309    SYMLINK libspdk_accel.so
00:05:24.309    CC lib/bdev/bdev.o
00:05:24.309    CC lib/bdev/bdev_rpc.o
00:05:24.310    CC lib/bdev/bdev_zone.o
00:05:24.310    CC lib/bdev/part.o
00:05:24.310    CC lib/bdev/scsi_nvme.o
00:05:24.569    SYMLINK libspdk_nvme.so
00:05:24.569    LIB libspdk_fuse_dispatcher.a
00:05:24.569    SO libspdk_fuse_dispatcher.so.1.0
00:05:24.569    SYMLINK libspdk_fuse_dispatcher.so
00:05:26.474    LIB libspdk_blob.a
00:05:26.474    SO libspdk_blob.so.11.0
00:05:26.474    SYMLINK libspdk_blob.so
00:05:26.474    CC lib/lvol/lvol.o
00:05:26.474    CC lib/blobfs/blobfs.o
00:05:26.474    CC lib/blobfs/tree.o
00:05:27.041    LIB libspdk_bdev.a
00:05:27.041    SO libspdk_bdev.so.17.0
00:05:27.379    SYMLINK libspdk_bdev.so
00:05:27.379    CC lib/scsi/dev.o
00:05:27.379    CC lib/scsi/port.o
00:05:27.379    CC lib/scsi/scsi.o
00:05:27.379    CC lib/scsi/lun.o
00:05:27.379    CC lib/scsi/scsi_bdev.o
00:05:27.379    CC lib/scsi/scsi_pr.o
00:05:27.379    CC lib/scsi/scsi_rpc.o
00:05:27.379    CC lib/nvmf/ctrlr.o
00:05:27.379    CC lib/nvmf/ctrlr_discovery.o
00:05:27.379    CC lib/scsi/task.o
00:05:27.379    CC lib/nvmf/ctrlr_bdev.o
00:05:27.379    CC lib/nvmf/subsystem.o
00:05:27.379    CC lib/nvmf/nvmf.o
00:05:27.379    CC lib/nbd/nbd_rpc.o
00:05:27.379    CC lib/nbd/nbd.o
00:05:27.379    CC lib/ftl/ftl_core.o
00:05:27.379    CC lib/nvmf/nvmf_rpc.o
00:05:27.379    CC lib/ftl/ftl_init.o
00:05:27.379    CC lib/nvmf/transport.o
00:05:27.379    CC lib/ftl/ftl_layout.o
00:05:27.380    CC lib/nvmf/tcp.o
00:05:27.380    CC lib/ftl/ftl_debug.o
00:05:27.380    CC lib/nvmf/stubs.o
00:05:27.380    CC lib/ublk/ublk.o
00:05:27.380    CC lib/ftl/ftl_io.o
00:05:27.380    CC lib/nvmf/mdns_server.o
00:05:27.380    CC lib/ftl/ftl_sb.o
00:05:27.380    CC lib/ublk/ublk_rpc.o
00:05:27.380    CC lib/ftl/ftl_l2p.o
00:05:27.380    CC lib/nvmf/vfio_user.o
00:05:27.380    CC lib/nvmf/rdma.o
00:05:27.380    CC lib/ftl/ftl_l2p_flat.o
00:05:27.380    CC lib/nvmf/auth.o
00:05:27.380    CC lib/ftl/ftl_nv_cache.o
00:05:27.380    CC lib/ftl/ftl_band.o
00:05:27.380    CC lib/ftl/ftl_band_ops.o
00:05:27.380    CC lib/ftl/ftl_rq.o
00:05:27.380    CC lib/ftl/ftl_writer.o
00:05:27.380    CC lib/ftl/ftl_reloc.o
00:05:27.380    CC lib/ftl/ftl_l2p_cache.o
00:05:27.380    CC lib/ftl/ftl_p2l.o
00:05:27.380    CC lib/ftl/ftl_p2l_log.o
00:05:27.380    CC lib/ftl/mngt/ftl_mngt_bdev.o
00:05:27.380    CC lib/ftl/mngt/ftl_mngt.o
00:05:27.380    CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:05:27.380    CC lib/ftl/mngt/ftl_mngt_startup.o
00:05:27.380    CC lib/ftl/mngt/ftl_mngt_md.o
00:05:27.380    CC lib/ftl/mngt/ftl_mngt_misc.o
00:05:27.380    CC lib/ftl/mngt/ftl_mngt_ioch.o
00:05:27.380    CC lib/ftl/mngt/ftl_mngt_l2p.o
00:05:27.380    CC lib/ftl/mngt/ftl_mngt_band.o
00:05:27.380    CC lib/ftl/mngt/ftl_mngt_self_test.o
00:05:27.380    CC lib/ftl/mngt/ftl_mngt_p2l.o
00:05:27.380    CC lib/ftl/mngt/ftl_mngt_recovery.o
00:05:27.380    CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:05:27.380    CC lib/ftl/utils/ftl_conf.o
00:05:27.380    CC lib/ftl/utils/ftl_md.o
00:05:27.380    CC lib/ftl/utils/ftl_mempool.o
00:05:27.380    CC lib/ftl/utils/ftl_bitmap.o
00:05:27.380    CC lib/ftl/utils/ftl_property.o
00:05:27.380    CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:05:27.380    CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:05:27.380    CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:05:27.380    CC lib/ftl/upgrade/ftl_band_upgrade.o
00:05:27.380    CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:05:27.380    CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:05:27.380    CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:05:27.380    CC lib/ftl/upgrade/ftl_sb_v3.o
00:05:27.380    CC lib/ftl/upgrade/ftl_sb_v5.o
00:05:27.380    CC lib/ftl/nvc/ftl_nvc_dev.o
00:05:27.380    CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:05:27.380    CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:05:27.380    CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:05:27.380    CC lib/ftl/base/ftl_base_dev.o
00:05:27.380    CC lib/ftl/base/ftl_base_bdev.o
00:05:27.380    CC lib/ftl/ftl_trace.o
00:05:27.692    LIB libspdk_blobfs.a
00:05:27.692    SO libspdk_blobfs.so.10.0
00:05:27.692    SYMLINK libspdk_blobfs.so
00:05:27.692    LIB libspdk_lvol.a
00:05:27.990    SO libspdk_lvol.so.10.0
00:05:27.990    SYMLINK libspdk_lvol.so
00:05:28.293    LIB libspdk_scsi.a
00:05:28.293    SO libspdk_scsi.so.9.0
00:05:28.293    LIB libspdk_nbd.a
00:05:28.293    SO libspdk_nbd.so.7.0
00:05:28.293    SYMLINK libspdk_scsi.so
00:05:28.293    SYMLINK libspdk_nbd.so
00:05:28.596    LIB libspdk_ublk.a
00:05:28.596    CC lib/iscsi/conn.o
00:05:28.596    CC lib/vhost/vhost.o
00:05:28.596    CC lib/vhost/vhost_rpc.o
00:05:28.596    CC lib/iscsi/iscsi.o
00:05:28.596    CC lib/iscsi/init_grp.o
00:05:28.596    CC lib/vhost/vhost_scsi.o
00:05:28.596    CC lib/iscsi/param.o
00:05:28.596    CC lib/iscsi/portal_grp.o
00:05:28.596    CC lib/vhost/rte_vhost_user.o
00:05:28.596    CC lib/iscsi/tgt_node.o
00:05:28.596    CC lib/vhost/vhost_blk.o
00:05:28.596    CC lib/iscsi/iscsi_subsystem.o
00:05:28.596    CC lib/iscsi/iscsi_rpc.o
00:05:28.596    CC lib/iscsi/task.o
00:05:28.596    SO libspdk_ublk.so.3.0
00:05:28.596    SYMLINK libspdk_ublk.so
00:05:28.882    LIB libspdk_ftl.a
00:05:28.882    SO libspdk_ftl.so.9.0
00:05:29.451    SYMLINK libspdk_ftl.so
00:05:29.451    LIB libspdk_vhost.a
00:05:29.710    SO libspdk_vhost.so.8.0
00:05:29.710    SYMLINK libspdk_vhost.so
00:05:29.969    LIB libspdk_iscsi.a
00:05:29.969    SO libspdk_iscsi.so.8.0
00:05:30.228    LIB libspdk_nvmf.a
00:05:30.228    SYMLINK libspdk_iscsi.so
00:05:30.228    SO libspdk_nvmf.so.20.0
00:05:30.487    SYMLINK libspdk_nvmf.so
00:05:30.746    CC module/env_dpdk/env_dpdk_rpc.o
00:05:30.746    CC module/vfu_device/vfu_virtio.o
00:05:30.746    CC module/vfu_device/vfu_virtio_blk.o
00:05:30.746    CC module/vfu_device/vfu_virtio_scsi.o
00:05:30.746    CC module/vfu_device/vfu_virtio_rpc.o
00:05:30.746    CC module/vfu_device/vfu_virtio_fs.o
00:05:30.746    CC module/scheduler/dynamic/scheduler_dynamic.o
00:05:30.746    CC module/scheduler/gscheduler/gscheduler.o
00:05:30.746    CC module/accel/ioat/accel_ioat.o
00:05:30.746    CC module/accel/ioat/accel_ioat_rpc.o
00:05:30.746    CC module/blob/bdev/blob_bdev.o
00:05:30.746    CC module/keyring/linux/keyring.o
00:05:30.746    CC module/keyring/file/keyring.o
00:05:30.746    CC module/keyring/linux/keyring_rpc.o
00:05:30.746    CC module/keyring/file/keyring_rpc.o
00:05:30.746    CC module/accel/iaa/accel_iaa.o
00:05:30.746    CC module/scheduler/dpdk_governor/dpdk_governor.o
00:05:30.747    CC module/accel/iaa/accel_iaa_rpc.o
00:05:30.747    CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev.o
00:05:30.747    CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev_rpc.o
00:05:30.747    CC module/fsdev/aio/fsdev_aio.o
00:05:30.747    CC module/fsdev/aio/fsdev_aio_rpc.o
00:05:30.747    CC module/fsdev/aio/linux_aio_mgr.o
00:05:30.747    CC module/accel/dsa/accel_dsa_rpc.o
00:05:30.747    CC module/accel/dsa/accel_dsa.o
00:05:30.747    CC module/accel/error/accel_error.o
00:05:30.747    CC module/sock/posix/posix.o
00:05:30.747    CC module/accel/error/accel_error_rpc.o
00:05:30.747    LIB libspdk_env_dpdk_rpc.a
00:05:30.747    SO libspdk_env_dpdk_rpc.so.6.0
00:05:31.006    SYMLINK libspdk_env_dpdk_rpc.so
00:05:31.006    LIB libspdk_keyring_linux.a
00:05:31.006    LIB libspdk_scheduler_gscheduler.a
00:05:31.006    LIB libspdk_keyring_file.a
00:05:31.006    LIB libspdk_scheduler_dpdk_governor.a
00:05:31.006    SO libspdk_keyring_linux.so.1.0
00:05:31.006    SO libspdk_scheduler_gscheduler.so.4.0
00:05:31.006    SO libspdk_keyring_file.so.2.0
00:05:31.006    SO libspdk_scheduler_dpdk_governor.so.4.0
00:05:31.006    LIB libspdk_scheduler_dynamic.a
00:05:31.006    LIB libspdk_accel_ioat.a
00:05:31.006    SO libspdk_accel_ioat.so.6.0
00:05:31.006    SO libspdk_scheduler_dynamic.so.4.0
00:05:31.006    LIB libspdk_accel_iaa.a
00:05:31.006    SYMLINK libspdk_scheduler_gscheduler.so
00:05:31.006    SYMLINK libspdk_keyring_linux.so
00:05:31.006    LIB libspdk_accel_error.a
00:05:31.006    SYMLINK libspdk_keyring_file.so
00:05:31.006    SYMLINK libspdk_scheduler_dpdk_governor.so
00:05:31.006    SO libspdk_accel_error.so.2.0
00:05:31.006    SO libspdk_accel_iaa.so.3.0
00:05:31.006    SYMLINK libspdk_scheduler_dynamic.so
00:05:31.006    SYMLINK libspdk_accel_ioat.so
00:05:31.006    LIB libspdk_blob_bdev.a
00:05:31.006    SYMLINK libspdk_accel_error.so
00:05:31.006    SYMLINK libspdk_accel_iaa.so
00:05:31.265    LIB libspdk_accel_dsa.a
00:05:31.265    SO libspdk_blob_bdev.so.11.0
00:05:31.265    SO libspdk_accel_dsa.so.5.0
00:05:31.265    SYMLINK libspdk_blob_bdev.so
00:05:31.265    SYMLINK libspdk_accel_dsa.so
00:05:31.524    CC module/bdev/lvol/vbdev_lvol.o
00:05:31.524    CC module/bdev/lvol/vbdev_lvol_rpc.o
00:05:31.524    CC module/bdev/split/vbdev_split.o
00:05:31.524    CC module/bdev/gpt/gpt.o
00:05:31.524    CC module/bdev/split/vbdev_split_rpc.o
00:05:31.524    CC module/bdev/delay/vbdev_delay.o
00:05:31.524    CC module/bdev/gpt/vbdev_gpt.o
00:05:31.524    CC module/bdev/delay/vbdev_delay_rpc.o
00:05:31.524    CC module/bdev/error/vbdev_error.o
00:05:31.524    CC module/bdev/malloc/bdev_malloc.o
00:05:31.524    CC module/bdev/malloc/bdev_malloc_rpc.o
00:05:31.524    CC module/bdev/error/vbdev_error_rpc.o
00:05:31.524    CC module/bdev/passthru/vbdev_passthru.o
00:05:31.524    CC module/bdev/raid/bdev_raid.o
00:05:31.524    CC module/bdev/passthru/vbdev_passthru_rpc.o
00:05:31.524    CC module/bdev/raid/bdev_raid_rpc.o
00:05:31.524    CC module/bdev/nvme/bdev_nvme.o
00:05:31.524    CC module/bdev/raid/bdev_raid_sb.o
00:05:31.524    CC module/blobfs/bdev/blobfs_bdev.o
00:05:31.524    CC module/bdev/raid/raid0.o
00:05:31.524    CC module/bdev/nvme/bdev_nvme_rpc.o
00:05:31.524    CC module/bdev/iscsi/bdev_iscsi.o
00:05:31.524    CC module/bdev/nvme/nvme_rpc.o
00:05:31.524    CC module/bdev/raid/concat.o
00:05:31.524    CC module/bdev/zone_block/vbdev_zone_block.o
00:05:31.524    CC module/bdev/raid/raid1.o
00:05:31.524    CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:05:31.524    CC module/bdev/nvme/bdev_mdns_client.o
00:05:31.524    CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:05:31.524    CC module/bdev/crypto/vbdev_crypto.o
00:05:31.524    CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:05:31.524    CC module/bdev/aio/bdev_aio.o
00:05:31.524    CC module/bdev/aio/bdev_aio_rpc.o
00:05:31.524    CC module/bdev/virtio/bdev_virtio_scsi.o
00:05:31.524    CC module/bdev/crypto/vbdev_crypto_rpc.o
00:05:31.524    CC module/bdev/nvme/vbdev_opal.o
00:05:31.524    CC module/bdev/null/bdev_null.o
00:05:31.524    CC module/bdev/null/bdev_null_rpc.o
00:05:31.524    CC module/bdev/nvme/vbdev_opal_rpc.o
00:05:31.524    CC module/bdev/virtio/bdev_virtio_blk.o
00:05:31.524    CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:05:31.524    CC module/bdev/virtio/bdev_virtio_rpc.o
00:05:31.524    CC module/bdev/ftl/bdev_ftl.o
00:05:31.524    CC module/bdev/ftl/bdev_ftl_rpc.o
00:05:31.524    LIB libspdk_vfu_device.a
00:05:31.524    SO libspdk_vfu_device.so.3.0
00:05:31.524    LIB libspdk_fsdev_aio.a
00:05:31.782    SYMLINK libspdk_vfu_device.so
00:05:31.782    SO libspdk_fsdev_aio.so.1.0
00:05:31.782    LIB libspdk_sock_posix.a
00:05:31.782    SO libspdk_sock_posix.so.6.0
00:05:31.782    SYMLINK libspdk_fsdev_aio.so
00:05:31.782    LIB libspdk_bdev_split.a
00:05:31.782    SO libspdk_bdev_split.so.6.0
00:05:31.782    SYMLINK libspdk_sock_posix.so
00:05:31.782    LIB libspdk_bdev_error.a
00:05:31.782    LIB libspdk_blobfs_bdev.a
00:05:31.782    LIB libspdk_bdev_gpt.a
00:05:31.782    SYMLINK libspdk_bdev_split.so
00:05:31.782    SO libspdk_bdev_error.so.6.0
00:05:31.782    SO libspdk_blobfs_bdev.so.6.0
00:05:31.782    LIB libspdk_bdev_null.a
00:05:31.782    SO libspdk_bdev_gpt.so.6.0
00:05:31.782    SO libspdk_bdev_null.so.6.0
00:05:31.782    LIB libspdk_bdev_ftl.a
00:05:31.782    LIB libspdk_bdev_passthru.a
00:05:31.782    SYMLINK libspdk_blobfs_bdev.so
00:05:32.040    SYMLINK libspdk_bdev_error.so
00:05:32.040    SYMLINK libspdk_bdev_gpt.so
00:05:32.040    SO libspdk_bdev_ftl.so.6.0
00:05:32.040    SO libspdk_bdev_passthru.so.6.0
00:05:32.040    LIB libspdk_bdev_zone_block.a
00:05:32.040    LIB libspdk_bdev_aio.a
00:05:32.040    SYMLINK libspdk_bdev_null.so
00:05:32.040    LIB libspdk_bdev_crypto.a
00:05:32.040    SO libspdk_bdev_zone_block.so.6.0
00:05:32.040    LIB libspdk_bdev_iscsi.a
00:05:32.040    SO libspdk_bdev_aio.so.6.0
00:05:32.040    LIB libspdk_bdev_malloc.a
00:05:32.040    SYMLINK libspdk_bdev_ftl.so
00:05:32.040    SO libspdk_bdev_crypto.so.6.0
00:05:32.040    SYMLINK libspdk_bdev_passthru.so
00:05:32.040    SO libspdk_bdev_iscsi.so.6.0
00:05:32.041    SO libspdk_bdev_malloc.so.6.0
00:05:32.041    SYMLINK libspdk_bdev_zone_block.so
00:05:32.041    SYMLINK libspdk_bdev_aio.so
00:05:32.041    SYMLINK libspdk_bdev_crypto.so
00:05:32.041    SYMLINK libspdk_bdev_iscsi.so
00:05:32.041    SYMLINK libspdk_bdev_malloc.so
00:05:32.041    LIB libspdk_bdev_delay.a
00:05:32.041    SO libspdk_bdev_delay.so.6.0
00:05:32.041    LIB libspdk_bdev_lvol.a
00:05:32.041    SO libspdk_bdev_lvol.so.6.0
00:05:32.041    SYMLINK libspdk_bdev_delay.so
00:05:32.299    SYMLINK libspdk_bdev_lvol.so
00:05:32.299    LIB libspdk_bdev_virtio.a
00:05:32.299    SO libspdk_bdev_virtio.so.6.0
00:05:32.299    SYMLINK libspdk_bdev_virtio.so
00:05:32.558    LIB libspdk_accel_dpdk_cryptodev.a
00:05:32.558    SO libspdk_accel_dpdk_cryptodev.so.3.0
00:05:32.558    SYMLINK libspdk_accel_dpdk_cryptodev.so
00:05:32.558    LIB libspdk_bdev_raid.a
00:05:32.817    SO libspdk_bdev_raid.so.6.0
00:05:32.817    SYMLINK libspdk_bdev_raid.so
00:05:34.196    LIB libspdk_bdev_nvme.a
00:05:34.196    SO libspdk_bdev_nvme.so.7.1
00:05:34.196    SYMLINK libspdk_bdev_nvme.so
00:05:34.760    CC module/event/subsystems/vfu_tgt/vfu_tgt.o
00:05:34.760    CC module/event/subsystems/fsdev/fsdev.o
00:05:34.760    CC module/event/subsystems/sock/sock.o
00:05:34.760    CC module/event/subsystems/scheduler/scheduler.o
00:05:34.760    CC module/event/subsystems/iobuf/iobuf_rpc.o
00:05:34.760    CC module/event/subsystems/iobuf/iobuf.o
00:05:34.760    CC module/event/subsystems/vmd/vmd.o
00:05:34.760    CC module/event/subsystems/vhost_blk/vhost_blk.o
00:05:34.760    CC module/event/subsystems/vmd/vmd_rpc.o
00:05:34.760    CC module/event/subsystems/keyring/keyring.o
00:05:34.760    LIB libspdk_event_keyring.a
00:05:34.760    LIB libspdk_event_vfu_tgt.a
00:05:34.760    LIB libspdk_event_scheduler.a
00:05:34.760    LIB libspdk_event_vhost_blk.a
00:05:34.760    LIB libspdk_event_sock.a
00:05:34.760    LIB libspdk_event_vmd.a
00:05:34.760    LIB libspdk_event_iobuf.a
00:05:34.760    SO libspdk_event_scheduler.so.4.0
00:05:34.760    SO libspdk_event_vhost_blk.so.3.0
00:05:34.760    SO libspdk_event_keyring.so.1.0
00:05:34.760    SO libspdk_event_vfu_tgt.so.3.0
00:05:34.760    SO libspdk_event_sock.so.5.0
00:05:34.761    LIB libspdk_event_fsdev.a
00:05:34.761    SO libspdk_event_vmd.so.6.0
00:05:34.761    SO libspdk_event_iobuf.so.3.0
00:05:34.761    SO libspdk_event_fsdev.so.1.0
00:05:34.761    SYMLINK libspdk_event_vhost_blk.so
00:05:34.761    SYMLINK libspdk_event_vfu_tgt.so
00:05:34.761    SYMLINK libspdk_event_sock.so
00:05:34.761    SYMLINK libspdk_event_scheduler.so
00:05:34.761    SYMLINK libspdk_event_vmd.so
00:05:34.761    SYMLINK libspdk_event_keyring.so
00:05:34.761    SYMLINK libspdk_event_fsdev.so
00:05:34.761    SYMLINK libspdk_event_iobuf.so
00:05:35.019    CC module/event/subsystems/accel/accel.o
00:05:35.019    LIB libspdk_event_accel.a
00:05:35.019    SO libspdk_event_accel.so.6.0
00:05:35.278    SYMLINK libspdk_event_accel.so
00:05:35.278    CC module/event/subsystems/bdev/bdev.o
00:05:35.536    LIB libspdk_event_bdev.a
00:05:35.536    SO libspdk_event_bdev.so.6.0
00:05:35.536    SYMLINK libspdk_event_bdev.so
00:05:35.536    CC module/event/subsystems/ublk/ublk.o
00:05:35.537    CC module/event/subsystems/nvmf/nvmf_rpc.o
00:05:35.537    CC module/event/subsystems/nvmf/nvmf_tgt.o
00:05:35.795    CC module/event/subsystems/nbd/nbd.o
00:05:35.795    CC module/event/subsystems/scsi/scsi.o
00:05:35.795    LIB libspdk_event_ublk.a
00:05:35.795    LIB libspdk_event_nbd.a
00:05:35.795    SO libspdk_event_ublk.so.3.0
00:05:35.795    LIB libspdk_event_scsi.a
00:05:35.795    SO libspdk_event_nbd.so.6.0
00:05:35.795    SO libspdk_event_scsi.so.6.0
00:05:35.795    SYMLINK libspdk_event_ublk.so
00:05:35.795    SYMLINK libspdk_event_nbd.so
00:05:35.795    LIB libspdk_event_nvmf.a
00:05:35.795    SYMLINK libspdk_event_scsi.so
00:05:35.795    SO libspdk_event_nvmf.so.6.0
00:05:36.054    SYMLINK libspdk_event_nvmf.so
00:05:36.054    CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:05:36.054    CC module/event/subsystems/iscsi/iscsi.o
00:05:36.054    LIB libspdk_event_vhost_scsi.a
00:05:36.054    LIB libspdk_event_iscsi.a
00:05:36.054    SO libspdk_event_vhost_scsi.so.3.0
00:05:36.312    SO libspdk_event_iscsi.so.6.0
00:05:36.312    SYMLINK libspdk_event_vhost_scsi.so
00:05:36.312    SYMLINK libspdk_event_iscsi.so
00:05:36.312    SO libspdk.so.6.0
00:05:36.312    SYMLINK libspdk.so
00:05:36.579    CXX app/trace/trace.o
00:05:36.579    CC app/trace_record/trace_record.o
00:05:36.579    CC app/spdk_top/spdk_top.o
00:05:36.579    CC app/spdk_nvme_identify/identify.o
00:05:36.579    CC app/spdk_lspci/spdk_lspci.o
00:05:36.579    CC app/spdk_nvme_perf/perf.o
00:05:36.579    CC app/spdk_nvme_discover/discovery_aer.o
00:05:36.579    TEST_HEADER include/spdk/accel.h
00:05:36.579    TEST_HEADER include/spdk/accel_module.h
00:05:36.579    TEST_HEADER include/spdk/assert.h
00:05:36.579    TEST_HEADER include/spdk/barrier.h
00:05:36.579    TEST_HEADER include/spdk/base64.h
00:05:36.579    TEST_HEADER include/spdk/bdev.h
00:05:36.579    TEST_HEADER include/spdk/bdev_module.h
00:05:36.579    TEST_HEADER include/spdk/bdev_zone.h
00:05:36.579    CC test/rpc_client/rpc_client_test.o
00:05:36.579    TEST_HEADER include/spdk/bit_array.h
00:05:36.579    TEST_HEADER include/spdk/bit_pool.h
00:05:36.579    TEST_HEADER include/spdk/blob_bdev.h
00:05:36.579    TEST_HEADER include/spdk/blobfs_bdev.h
00:05:36.579    TEST_HEADER include/spdk/blobfs.h
00:05:36.579    TEST_HEADER include/spdk/blob.h
00:05:36.579    TEST_HEADER include/spdk/conf.h
00:05:36.579    TEST_HEADER include/spdk/config.h
00:05:36.579    TEST_HEADER include/spdk/cpuset.h
00:05:36.579    TEST_HEADER include/spdk/crc16.h
00:05:36.579    TEST_HEADER include/spdk/crc32.h
00:05:36.579    TEST_HEADER include/spdk/crc64.h
00:05:36.579    TEST_HEADER include/spdk/dif.h
00:05:36.579    TEST_HEADER include/spdk/dma.h
00:05:36.579    TEST_HEADER include/spdk/endian.h
00:05:36.579    TEST_HEADER include/spdk/env_dpdk.h
00:05:36.579    TEST_HEADER include/spdk/env.h
00:05:36.579    TEST_HEADER include/spdk/event.h
00:05:36.579    CC examples/interrupt_tgt/interrupt_tgt.o
00:05:36.579    TEST_HEADER include/spdk/fd_group.h
00:05:36.579    TEST_HEADER include/spdk/fd.h
00:05:36.579    TEST_HEADER include/spdk/file.h
00:05:36.579    TEST_HEADER include/spdk/fsdev.h
00:05:36.579    TEST_HEADER include/spdk/fsdev_module.h
00:05:36.579    TEST_HEADER include/spdk/ftl.h
00:05:36.579    TEST_HEADER include/spdk/fuse_dispatcher.h
00:05:36.579    TEST_HEADER include/spdk/gpt_spec.h
00:05:36.579    TEST_HEADER include/spdk/hexlify.h
00:05:36.579    TEST_HEADER include/spdk/histogram_data.h
00:05:36.579    TEST_HEADER include/spdk/idxd.h
00:05:36.579    TEST_HEADER include/spdk/idxd_spec.h
00:05:36.579    CC app/spdk_dd/spdk_dd.o
00:05:36.579    TEST_HEADER include/spdk/init.h
00:05:36.579    TEST_HEADER include/spdk/ioat.h
00:05:36.579    TEST_HEADER include/spdk/ioat_spec.h
00:05:36.579    TEST_HEADER include/spdk/iscsi_spec.h
00:05:36.579    TEST_HEADER include/spdk/json.h
00:05:36.579    TEST_HEADER include/spdk/jsonrpc.h
00:05:36.579    TEST_HEADER include/spdk/keyring.h
00:05:36.579    TEST_HEADER include/spdk/keyring_module.h
00:05:36.579    TEST_HEADER include/spdk/likely.h
00:05:36.579    TEST_HEADER include/spdk/log.h
00:05:36.579    TEST_HEADER include/spdk/lvol.h
00:05:36.579    CC app/nvmf_tgt/nvmf_main.o
00:05:36.579    TEST_HEADER include/spdk/md5.h
00:05:36.579    TEST_HEADER include/spdk/mmio.h
00:05:36.579    TEST_HEADER include/spdk/memory.h
00:05:36.579    TEST_HEADER include/spdk/nbd.h
00:05:36.579    TEST_HEADER include/spdk/net.h
00:05:36.579    CC app/iscsi_tgt/iscsi_tgt.o
00:05:36.579    TEST_HEADER include/spdk/notify.h
00:05:36.579    TEST_HEADER include/spdk/nvme.h
00:05:36.579    TEST_HEADER include/spdk/nvme_intel.h
00:05:36.579    TEST_HEADER include/spdk/nvme_ocssd.h
00:05:36.579    TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:05:36.579    TEST_HEADER include/spdk/nvme_spec.h
00:05:36.579    TEST_HEADER include/spdk/nvme_zns.h
00:05:36.579    TEST_HEADER include/spdk/nvmf_cmd.h
00:05:36.579    TEST_HEADER include/spdk/nvmf_fc_spec.h
00:05:36.579    TEST_HEADER include/spdk/nvmf.h
00:05:36.579    TEST_HEADER include/spdk/nvmf_spec.h
00:05:36.579    TEST_HEADER include/spdk/nvmf_transport.h
00:05:36.579    TEST_HEADER include/spdk/opal.h
00:05:36.579    TEST_HEADER include/spdk/opal_spec.h
00:05:36.579    TEST_HEADER include/spdk/pci_ids.h
00:05:36.579    TEST_HEADER include/spdk/pipe.h
00:05:36.579    TEST_HEADER include/spdk/queue.h
00:05:36.579    TEST_HEADER include/spdk/reduce.h
00:05:36.579    TEST_HEADER include/spdk/rpc.h
00:05:36.579    TEST_HEADER include/spdk/scheduler.h
00:05:36.579    TEST_HEADER include/spdk/scsi.h
00:05:36.579    TEST_HEADER include/spdk/scsi_spec.h
00:05:36.579    TEST_HEADER include/spdk/sock.h
00:05:36.579    TEST_HEADER include/spdk/stdinc.h
00:05:36.579    TEST_HEADER include/spdk/string.h
00:05:36.579    TEST_HEADER include/spdk/thread.h
00:05:36.579    TEST_HEADER include/spdk/trace.h
00:05:36.579    TEST_HEADER include/spdk/trace_parser.h
00:05:36.579    TEST_HEADER include/spdk/tree.h
00:05:36.579    TEST_HEADER include/spdk/ublk.h
00:05:36.579    TEST_HEADER include/spdk/util.h
00:05:36.579    TEST_HEADER include/spdk/uuid.h
00:05:36.579    TEST_HEADER include/spdk/version.h
00:05:36.579    TEST_HEADER include/spdk/vfio_user_pci.h
00:05:36.579    TEST_HEADER include/spdk/vfio_user_spec.h
00:05:36.579    TEST_HEADER include/spdk/vhost.h
00:05:36.579    TEST_HEADER include/spdk/vmd.h
00:05:36.579    TEST_HEADER include/spdk/xor.h
00:05:36.579    TEST_HEADER include/spdk/zipf.h
00:05:36.579    CXX test/cpp_headers/accel.o
00:05:36.579    CXX test/cpp_headers/accel_module.o
00:05:36.579    CXX test/cpp_headers/assert.o
00:05:36.579    CXX test/cpp_headers/barrier.o
00:05:36.579    CXX test/cpp_headers/base64.o
00:05:36.579    CXX test/cpp_headers/bdev.o
00:05:36.579    CXX test/cpp_headers/bdev_module.o
00:05:36.579    CXX test/cpp_headers/bdev_zone.o
00:05:36.579    CXX test/cpp_headers/bit_array.o
00:05:36.579    CXX test/cpp_headers/bit_pool.o
00:05:36.579    CXX test/cpp_headers/blob_bdev.o
00:05:36.579    CXX test/cpp_headers/blobfs_bdev.o
00:05:36.579    CXX test/cpp_headers/blobfs.o
00:05:36.579    CXX test/cpp_headers/blob.o
00:05:36.579    CXX test/cpp_headers/conf.o
00:05:36.579    CXX test/cpp_headers/config.o
00:05:36.579    CXX test/cpp_headers/crc16.o
00:05:36.579    CXX test/cpp_headers/cpuset.o
00:05:36.579    CXX test/cpp_headers/crc32.o
00:05:36.579    CXX test/cpp_headers/dif.o
00:05:36.579    CXX test/cpp_headers/crc64.o
00:05:36.579    CC app/spdk_tgt/spdk_tgt.o
00:05:36.579    CXX test/cpp_headers/dma.o
00:05:36.579    CXX test/cpp_headers/endian.o
00:05:36.579    CXX test/cpp_headers/env_dpdk.o
00:05:36.579    CXX test/cpp_headers/env.o
00:05:36.579    CXX test/cpp_headers/event.o
00:05:36.579    CXX test/cpp_headers/fd_group.o
00:05:36.579    CXX test/cpp_headers/file.o
00:05:36.579    CXX test/cpp_headers/fd.o
00:05:36.579    CXX test/cpp_headers/fsdev.o
00:05:36.579    CXX test/cpp_headers/fsdev_module.o
00:05:36.579    CXX test/cpp_headers/ftl.o
00:05:36.579    CXX test/cpp_headers/fuse_dispatcher.o
00:05:36.579    CXX test/cpp_headers/gpt_spec.o
00:05:36.579    CXX test/cpp_headers/hexlify.o
00:05:36.579    CXX test/cpp_headers/histogram_data.o
00:05:36.579    CXX test/cpp_headers/idxd.o
00:05:36.579    CXX test/cpp_headers/idxd_spec.o
00:05:36.579    CXX test/cpp_headers/init.o
00:05:36.579    CXX test/cpp_headers/ioat.o
00:05:36.579    CXX test/cpp_headers/ioat_spec.o
00:05:36.579    CXX test/cpp_headers/iscsi_spec.o
00:05:36.579    CXX test/cpp_headers/json.o
00:05:36.579    CXX test/cpp_headers/jsonrpc.o
00:05:36.579    CXX test/cpp_headers/keyring.o
00:05:36.579    CXX test/cpp_headers/keyring_module.o
00:05:36.579    CXX test/cpp_headers/log.o
00:05:36.579    CXX test/cpp_headers/likely.o
00:05:36.579    CXX test/cpp_headers/lvol.o
00:05:36.579    CXX test/cpp_headers/md5.o
00:05:36.579    CXX test/cpp_headers/mmio.o
00:05:36.579    CXX test/cpp_headers/memory.o
00:05:36.579    CXX test/cpp_headers/nbd.o
00:05:36.579    CXX test/cpp_headers/net.o
00:05:36.579    CXX test/cpp_headers/nvme.o
00:05:36.579    CXX test/cpp_headers/notify.o
00:05:36.579    CXX test/cpp_headers/nvme_intel.o
00:05:36.579    CXX test/cpp_headers/nvme_ocssd.o
00:05:36.579    CC examples/util/zipf/zipf.o
00:05:36.579    CC examples/ioat/perf/perf.o
00:05:36.579    CC examples/ioat/verify/verify.o
00:05:36.579    CC app/fio/nvme/fio_plugin.o
00:05:36.579    CXX test/cpp_headers/nvme_ocssd_spec.o
00:05:36.579    CXX test/cpp_headers/nvme_spec.o
00:05:36.579    CC test/app/stub/stub.o
00:05:36.579    CC test/thread/poller_perf/poller_perf.o
00:05:36.844    CC app/fio/bdev/fio_plugin.o
00:05:36.844    CC test/app/histogram_perf/histogram_perf.o
00:05:36.844    CC test/env/memory/memory_ut.o
00:05:36.844    CC test/dma/test_dma/test_dma.o
00:05:36.844    CC test/app/jsoncat/jsoncat.o
00:05:36.844    CC test/env/vtophys/vtophys.o
00:05:36.844    CC test/env/pci/pci_ut.o
00:05:36.844    CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:05:36.844    CC test/app/bdev_svc/bdev_svc.o
00:05:36.844    LINK spdk_lspci
00:05:37.106    LINK rpc_client_test
00:05:37.106    CC test/env/mem_callbacks/mem_callbacks.o
00:05:37.106    LINK spdk_nvme_discover
00:05:37.106    LINK interrupt_tgt
00:05:37.106    LINK nvmf_tgt
00:05:37.106    CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:05:37.365    LINK iscsi_tgt
00:05:37.365    LINK zipf
00:05:37.365    LINK spdk_trace_record
00:05:37.365    LINK jsoncat
00:05:37.365    LINK histogram_perf
00:05:37.366    LINK poller_perf
00:05:37.366    CXX test/cpp_headers/nvme_zns.o
00:05:37.366    CXX test/cpp_headers/nvmf_cmd.o
00:05:37.366    CXX test/cpp_headers/nvmf_fc_spec.o
00:05:37.366    CXX test/cpp_headers/nvmf.o
00:05:37.366    CXX test/cpp_headers/nvmf_spec.o
00:05:37.366    LINK vtophys
00:05:37.366    CXX test/cpp_headers/nvmf_transport.o
00:05:37.366    CXX test/cpp_headers/opal.o
00:05:37.366    CXX test/cpp_headers/pci_ids.o
00:05:37.366    CXX test/cpp_headers/opal_spec.o
00:05:37.366    LINK spdk_tgt
00:05:37.366    CXX test/cpp_headers/pipe.o
00:05:37.366    CXX test/cpp_headers/queue.o
00:05:37.366    CXX test/cpp_headers/reduce.o
00:05:37.366    CXX test/cpp_headers/rpc.o
00:05:37.366    CXX test/cpp_headers/scheduler.o
00:05:37.366    CXX test/cpp_headers/scsi.o
00:05:37.366    LINK stub
00:05:37.366    CXX test/cpp_headers/scsi_spec.o
00:05:37.366    CXX test/cpp_headers/sock.o
00:05:37.366    CXX test/cpp_headers/stdinc.o
00:05:37.366    LINK env_dpdk_post_init
00:05:37.366    CXX test/cpp_headers/string.o
00:05:37.366    CXX test/cpp_headers/trace.o
00:05:37.366    CXX test/cpp_headers/thread.o
00:05:37.366    CXX test/cpp_headers/trace_parser.o
00:05:37.366    CXX test/cpp_headers/tree.o
00:05:37.366    CXX test/cpp_headers/ublk.o
00:05:37.366    CXX test/cpp_headers/util.o
00:05:37.366    CXX test/cpp_headers/uuid.o
00:05:37.366    CXX test/cpp_headers/version.o
00:05:37.366    CXX test/cpp_headers/vfio_user_pci.o
00:05:37.366    CXX test/cpp_headers/vfio_user_spec.o
00:05:37.366    CXX test/cpp_headers/vhost.o
00:05:37.366    CXX test/cpp_headers/vmd.o
00:05:37.366    CXX test/cpp_headers/xor.o
00:05:37.366    CXX test/cpp_headers/zipf.o
00:05:37.366    LINK verify
00:05:37.366    LINK ioat_perf
00:05:37.366    LINK bdev_svc
00:05:37.625    CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:05:37.625    CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:05:37.625    CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:05:37.625    LINK spdk_dd
00:05:37.625    LINK spdk_trace
00:05:37.885    CC examples/vmd/lsvmd/lsvmd.o
00:05:37.885    CC examples/vmd/led/led.o
00:05:37.885    CC examples/sock/hello_world/hello_sock.o
00:05:37.885    CC examples/idxd/perf/perf.o
00:05:37.885    LINK spdk_bdev
00:05:37.885    LINK pci_ut
00:05:37.885    CC test/event/event_perf/event_perf.o
00:05:37.885    CC test/event/reactor/reactor.o
00:05:37.885    CC test/event/reactor_perf/reactor_perf.o
00:05:37.885    CC test/event/app_repeat/app_repeat.o
00:05:37.885    CC examples/thread/thread/thread_ex.o
00:05:37.885    CC test/event/scheduler/scheduler.o
00:05:37.885    LINK test_dma
00:05:37.885    CC app/vhost/vhost.o
00:05:37.885    LINK mem_callbacks
00:05:37.885    LINK lsvmd
00:05:37.885    LINK reactor
00:05:37.885    LINK reactor_perf
00:05:37.885    LINK event_perf
00:05:37.885    LINK app_repeat
00:05:37.885    LINK led
00:05:37.885    LINK spdk_nvme
00:05:38.144    LINK spdk_nvme_identify
00:05:38.144    LINK nvme_fuzz
00:05:38.144    LINK hello_sock
00:05:38.144    LINK thread
00:05:38.144    LINK scheduler
00:05:38.144    LINK vhost
00:05:38.144    LINK spdk_top
00:05:38.144    LINK vhost_fuzz
00:05:38.144    LINK idxd_perf
00:05:38.144    LINK spdk_nvme_perf
00:05:38.403    CC test/nvme/aer/aer.o
00:05:38.403    CC test/nvme/sgl/sgl.o
00:05:38.403    CC test/nvme/reset/reset.o
00:05:38.403    CC test/nvme/err_injection/err_injection.o
00:05:38.403    CC test/nvme/reserve/reserve.o
00:05:38.403    CC test/nvme/boot_partition/boot_partition.o
00:05:38.403    CC test/nvme/startup/startup.o
00:05:38.403    CC test/nvme/compliance/nvme_compliance.o
00:05:38.403    CC test/nvme/connect_stress/connect_stress.o
00:05:38.403    CC test/nvme/overhead/overhead.o
00:05:38.403    CC test/nvme/e2edp/nvme_dp.o
00:05:38.403    CC test/nvme/fused_ordering/fused_ordering.o
00:05:38.403    CC test/nvme/doorbell_aers/doorbell_aers.o
00:05:38.403    CC test/nvme/simple_copy/simple_copy.o
00:05:38.403    CC test/nvme/cuse/cuse.o
00:05:38.403    CC test/nvme/fdp/fdp.o
00:05:38.403    CC test/blobfs/mkfs/mkfs.o
00:05:38.403    CC test/accel/dif/dif.o
00:05:38.403    CC test/lvol/esnap/esnap.o
00:05:38.403    CC examples/nvme/reconnect/reconnect.o
00:05:38.403    CC examples/nvme/nvme_manage/nvme_manage.o
00:05:38.403    CC examples/nvme/hello_world/hello_world.o
00:05:38.403    CC examples/nvme/pmr_persistence/pmr_persistence.o
00:05:38.403    CC examples/nvme/abort/abort.o
00:05:38.403    CC examples/nvme/cmb_copy/cmb_copy.o
00:05:38.403    CC examples/nvme/arbitration/arbitration.o
00:05:38.403    CC examples/nvme/hotplug/hotplug.o
00:05:38.403    LINK boot_partition
00:05:38.403    CC examples/accel/perf/accel_perf.o
00:05:38.661    LINK startup
00:05:38.661    LINK err_injection
00:05:38.661    CC examples/blob/hello_world/hello_blob.o
00:05:38.661    CC examples/blob/cli/blobcli.o
00:05:38.661    CC examples/fsdev/hello_world/hello_fsdev.o
00:05:38.661    LINK doorbell_aers
00:05:38.661    LINK connect_stress
00:05:38.661    LINK fused_ordering
00:05:38.661    LINK reserve
00:05:38.661    LINK mkfs
00:05:38.662    LINK memory_ut
00:05:38.662    LINK simple_copy
00:05:38.662    LINK reset
00:05:38.662    LINK sgl
00:05:38.662    LINK pmr_persistence
00:05:38.662    LINK aer
00:05:38.662    LINK cmb_copy
00:05:38.662    LINK nvme_dp
00:05:38.662    LINK overhead
00:05:38.662    LINK hello_world
00:05:38.662    LINK hotplug
00:05:38.662    LINK nvme_compliance
00:05:38.662    LINK fdp
00:05:38.921    LINK arbitration
00:05:38.921    LINK hello_blob
00:05:38.921    LINK hello_fsdev
00:05:38.921    LINK reconnect
00:05:38.921    LINK abort
00:05:39.180    LINK nvme_manage
00:05:39.180    LINK accel_perf
00:05:39.180    LINK blobcli
00:05:39.180    LINK dif
00:05:39.439    CC examples/bdev/hello_world/hello_bdev.o
00:05:39.439    CC examples/bdev/bdevperf/bdevperf.o
00:05:39.439    CC test/bdev/bdevio/bdevio.o
00:05:39.699    LINK iscsi_fuzz
00:05:39.699    LINK hello_bdev
00:05:39.699    LINK cuse
00:05:39.958    LINK bdevio
00:05:40.216    LINK bdevperf
00:05:40.475    CC examples/nvmf/nvmf/nvmf.o
00:05:40.734    LINK nvmf
00:05:44.022    LINK esnap
00:05:44.282  
00:05:44.282  real	1m3.219s
00:05:44.282  user	15m26.003s
00:05:44.282  sys	2m55.272s
00:05:44.282   18:26:30 make -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:05:44.282   18:26:30 make -- common/autotest_common.sh@10 -- $ set +x
00:05:44.282  ************************************
00:05:44.282  END TEST make
00:05:44.282  ************************************
00:05:44.282   18:26:30  -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:05:44.282   18:26:30  -- pm/common@29 -- $ signal_monitor_resources TERM
00:05:44.282   18:26:30  -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:05:44.282   18:26:30  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:44.282   18:26:30  -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:05:44.282   18:26:30  -- pm/common@44 -- $ pid=265987
00:05:44.282   18:26:30  -- pm/common@50 -- $ kill -TERM 265987
00:05:44.282   18:26:30  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:44.282   18:26:30  -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:05:44.282   18:26:30  -- pm/common@44 -- $ pid=265988
00:05:44.282   18:26:30  -- pm/common@50 -- $ kill -TERM 265988
00:05:44.282   18:26:30  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:44.282   18:26:30  -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:05:44.282   18:26:30  -- pm/common@44 -- $ pid=265991
00:05:44.282   18:26:30  -- pm/common@50 -- $ kill -TERM 265991
00:05:44.282   18:26:30  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:44.282   18:26:30  -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:05:44.282   18:26:30  -- pm/common@44 -- $ pid=266015
00:05:44.282   18:26:30  -- pm/common@50 -- $ sudo -E kill -TERM 266015
00:05:44.282   18:26:30  -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 ))
00:05:44.282   18:26:30  -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/vfio-user-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/vfio-user-phy-autotest/autorun-spdk.conf
00:05:44.282    18:26:30  -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:44.282     18:26:30  -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:44.282     18:26:30  -- common/autotest_common.sh@1693 -- # lcov --version
00:05:44.282    18:26:30  -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:44.282    18:26:30  -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:44.282    18:26:30  -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:44.282    18:26:30  -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:44.282    18:26:30  -- scripts/common.sh@336 -- # IFS=.-:
00:05:44.282    18:26:30  -- scripts/common.sh@336 -- # read -ra ver1
00:05:44.282    18:26:30  -- scripts/common.sh@337 -- # IFS=.-:
00:05:44.282    18:26:30  -- scripts/common.sh@337 -- # read -ra ver2
00:05:44.282    18:26:30  -- scripts/common.sh@338 -- # local 'op=<'
00:05:44.282    18:26:30  -- scripts/common.sh@340 -- # ver1_l=2
00:05:44.282    18:26:30  -- scripts/common.sh@341 -- # ver2_l=1
00:05:44.282    18:26:30  -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:44.282    18:26:30  -- scripts/common.sh@344 -- # case "$op" in
00:05:44.282    18:26:30  -- scripts/common.sh@345 -- # : 1
00:05:44.282    18:26:30  -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:44.282    18:26:30  -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:44.282     18:26:30  -- scripts/common.sh@365 -- # decimal 1
00:05:44.282     18:26:30  -- scripts/common.sh@353 -- # local d=1
00:05:44.282     18:26:30  -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:44.282     18:26:30  -- scripts/common.sh@355 -- # echo 1
00:05:44.282    18:26:30  -- scripts/common.sh@365 -- # ver1[v]=1
00:05:44.282     18:26:30  -- scripts/common.sh@366 -- # decimal 2
00:05:44.282     18:26:30  -- scripts/common.sh@353 -- # local d=2
00:05:44.282     18:26:30  -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:44.282     18:26:30  -- scripts/common.sh@355 -- # echo 2
00:05:44.282    18:26:30  -- scripts/common.sh@366 -- # ver2[v]=2
00:05:44.282    18:26:30  -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:44.282    18:26:30  -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:44.282    18:26:30  -- scripts/common.sh@368 -- # return 0
00:05:44.282    18:26:30  -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:44.282    18:26:30  -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:44.282  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:44.282  		--rc genhtml_branch_coverage=1
00:05:44.282  		--rc genhtml_function_coverage=1
00:05:44.282  		--rc genhtml_legend=1
00:05:44.282  		--rc geninfo_all_blocks=1
00:05:44.282  		--rc geninfo_unexecuted_blocks=1
00:05:44.282  		
00:05:44.282  		'
00:05:44.282    18:26:30  -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:44.282  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:44.282  		--rc genhtml_branch_coverage=1
00:05:44.282  		--rc genhtml_function_coverage=1
00:05:44.282  		--rc genhtml_legend=1
00:05:44.282  		--rc geninfo_all_blocks=1
00:05:44.282  		--rc geninfo_unexecuted_blocks=1
00:05:44.282  		
00:05:44.282  		'
00:05:44.282    18:26:30  -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:44.282  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:44.282  		--rc genhtml_branch_coverage=1
00:05:44.282  		--rc genhtml_function_coverage=1
00:05:44.282  		--rc genhtml_legend=1
00:05:44.282  		--rc geninfo_all_blocks=1
00:05:44.282  		--rc geninfo_unexecuted_blocks=1
00:05:44.282  		
00:05:44.282  		'
00:05:44.282    18:26:30  -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:05:44.282  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:44.282  		--rc genhtml_branch_coverage=1
00:05:44.282  		--rc genhtml_function_coverage=1
00:05:44.282  		--rc genhtml_legend=1
00:05:44.282  		--rc geninfo_all_blocks=1
00:05:44.282  		--rc geninfo_unexecuted_blocks=1
00:05:44.282  		
00:05:44.282  		'
00:05:44.282   18:26:30  -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/nvmf/common.sh
00:05:44.282     18:26:30  -- nvmf/common.sh@7 -- # uname -s
00:05:44.282    18:26:30  -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:44.282    18:26:30  -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:44.282    18:26:30  -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:44.282    18:26:30  -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:44.282    18:26:30  -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:44.282    18:26:30  -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:44.282    18:26:30  -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:44.282    18:26:30  -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:44.282    18:26:30  -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:44.282     18:26:30  -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:44.282    18:26:30  -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:808ec059-55a7-e511-906e-0012795d96dd
00:05:44.282    18:26:30  -- nvmf/common.sh@18 -- # NVME_HOSTID=808ec059-55a7-e511-906e-0012795d96dd
00:05:44.282    18:26:30  -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:44.282    18:26:30  -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:44.282    18:26:30  -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:05:44.282    18:26:30  -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:44.282    18:26:30  -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/common.sh
00:05:44.283     18:26:30  -- scripts/common.sh@15 -- # shopt -s extglob
00:05:44.283     18:26:30  -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:44.283     18:26:30  -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:44.283     18:26:30  -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:44.283      18:26:30  -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:44.283      18:26:30  -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:44.283      18:26:30  -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:44.283      18:26:30  -- paths/export.sh@5 -- # export PATH
00:05:44.283      18:26:30  -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:44.283    18:26:30  -- nvmf/common.sh@51 -- # : 0
00:05:44.283    18:26:30  -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:05:44.283    18:26:30  -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:05:44.283    18:26:30  -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:44.283    18:26:30  -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:44.283    18:26:30  -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:44.283    18:26:30  -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:05:44.283  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:05:44.283    18:26:30  -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:05:44.283    18:26:30  -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:05:44.283    18:26:30  -- nvmf/common.sh@55 -- # have_pci_nics=0
00:05:44.283   18:26:30  -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:05:44.283    18:26:30  -- spdk/autotest.sh@32 -- # uname -s
00:05:44.283   18:26:30  -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:05:44.283   18:26:30  -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:05:44.283   18:26:30  -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/coredumps
00:05:44.283   18:26:30  -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:05:44.283   18:26:30  -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/coredumps
00:05:44.283   18:26:30  -- spdk/autotest.sh@44 -- # modprobe nbd
00:05:44.283    18:26:30  -- spdk/autotest.sh@46 -- # type -P udevadm
00:05:44.283   18:26:30  -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:05:44.283   18:26:30  -- spdk/autotest.sh@48 -- # udevadm_pid=366250
00:05:44.283   18:26:30  -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:05:44.283   18:26:30  -- spdk/autotest.sh@53 -- # start_monitor_resources
00:05:44.283   18:26:30  -- pm/common@17 -- # local monitor
00:05:44.283   18:26:30  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:05:44.283   18:26:30  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:05:44.283    18:26:30  -- pm/common@21 -- # date +%s
00:05:44.283   18:26:30  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:05:44.283   18:26:30  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:05:44.283    18:26:30  -- pm/common@21 -- # date +%s
00:05:44.283    18:26:30  -- pm/common@21 -- # date +%s
00:05:44.283   18:26:30  -- pm/common@25 -- # sleep 1
00:05:44.283   18:26:30  -- pm/common@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731864390
00:05:44.283    18:26:30  -- pm/common@21 -- # date +%s
00:05:44.283   18:26:30  -- pm/common@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731864390
00:05:44.283   18:26:30  -- pm/common@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731864390
00:05:44.283   18:26:30  -- pm/common@21 -- # sudo -E /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731864390
00:05:44.542  Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731864390_collect-cpu-load.pm.log
00:05:44.542  Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731864390_collect-cpu-temp.pm.log
00:05:44.542  Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731864390_collect-vmstat.pm.log
00:05:44.542  Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731864390_collect-bmc-pm.bmc.pm.log
00:05:45.481   18:26:31  -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:05:45.481   18:26:31  -- spdk/autotest.sh@57 -- # timing_enter autotest
00:05:45.481   18:26:31  -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:45.481   18:26:31  -- common/autotest_common.sh@10 -- # set +x
00:05:45.481   18:26:31  -- spdk/autotest.sh@59 -- # create_test_list
00:05:45.481   18:26:31  -- common/autotest_common.sh@752 -- # xtrace_disable
00:05:45.481   18:26:31  -- common/autotest_common.sh@10 -- # set +x
00:05:45.481     18:26:31  -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/autotest.sh
00:05:45.481    18:26:31  -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:05:45.481   18:26:31  -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:05:45.481   18:26:31  -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output
00:05:45.481   18:26:31  -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:05:45.481   18:26:31  -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:05:45.481    18:26:31  -- common/autotest_common.sh@1457 -- # uname
00:05:45.481   18:26:31  -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']'
00:05:45.481   18:26:31  -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:05:45.481    18:26:31  -- common/autotest_common.sh@1477 -- # uname
00:05:45.481   18:26:31  -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]]
00:05:45.481   18:26:31  -- spdk/autotest.sh@68 -- # [[ y == y ]]
00:05:45.482   18:26:31  -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:05:45.482  lcov: LCOV version 1.15
00:05:45.482   18:26:31  -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_base.info
00:06:03.571  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:06:03.571  geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:06:08.842   18:26:55  -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:06:08.842   18:26:55  -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:08.842   18:26:55  -- common/autotest_common.sh@10 -- # set +x
00:06:08.842   18:26:55  -- spdk/autotest.sh@78 -- # rm -f
00:06:08.842   18:26:55  -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh reset
00:06:10.219  0000:00:04.7 (8086 6f27): Already using the ioatdma driver
00:06:10.219  0000:00:04.6 (8086 6f26): Already using the ioatdma driver
00:06:10.219  0000:00:04.5 (8086 6f25): Already using the ioatdma driver
00:06:10.220  0000:00:04.4 (8086 6f24): Already using the ioatdma driver
00:06:10.220  0000:00:04.3 (8086 6f23): Already using the ioatdma driver
00:06:10.220  0000:00:04.2 (8086 6f22): Already using the ioatdma driver
00:06:10.220  0000:00:04.1 (8086 6f21): Already using the ioatdma driver
00:06:10.220  0000:00:04.0 (8086 6f20): Already using the ioatdma driver
00:06:10.220  0000:80:04.7 (8086 6f27): Already using the ioatdma driver
00:06:10.220  0000:80:04.6 (8086 6f26): Already using the ioatdma driver
00:06:10.220  0000:80:04.5 (8086 6f25): Already using the ioatdma driver
00:06:10.220  0000:80:04.4 (8086 6f24): Already using the ioatdma driver
00:06:10.220  0000:80:04.3 (8086 6f23): Already using the ioatdma driver
00:06:10.220  0000:80:04.2 (8086 6f22): Already using the ioatdma driver
00:06:10.220  0000:80:04.1 (8086 6f21): Already using the ioatdma driver
00:06:10.220  0000:80:04.0 (8086 6f20): Already using the ioatdma driver
00:06:10.220  0000:0d:00.0 (8086 0a54): Already using the nvme driver
00:06:10.220   18:26:56  -- spdk/autotest.sh@83 -- # get_zoned_devs
00:06:10.220   18:26:56  -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:06:10.220   18:26:56  -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:06:10.220   18:26:56  -- common/autotest_common.sh@1658 -- # local nvme bdf
00:06:10.220   18:26:56  -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:06:10.220   18:26:56  -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1
00:06:10.220   18:26:56  -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:06:10.220   18:26:56  -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:06:10.220   18:26:56  -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:06:10.220   18:26:56  -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:06:10.220   18:26:56  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:06:10.220   18:26:56  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:06:10.220   18:26:56  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:06:10.220   18:26:56  -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:06:10.220   18:26:56  -- scripts/common.sh@390 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:06:10.220  No valid GPT data, bailing
00:06:10.220    18:26:56  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:06:10.220   18:26:56  -- scripts/common.sh@394 -- # pt=
00:06:10.220   18:26:56  -- scripts/common.sh@395 -- # return 1
00:06:10.220   18:26:56  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:06:10.220  1+0 records in
00:06:10.220  1+0 records out
00:06:10.220  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00216061 s, 485 MB/s
00:06:10.220   18:26:56  -- spdk/autotest.sh@105 -- # sync
00:06:10.480   18:26:56  -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:06:10.480   18:26:56  -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:06:10.480    18:26:56  -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:06:13.017    18:26:59  -- spdk/autotest.sh@111 -- # uname -s
00:06:13.017   18:26:59  -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:06:13.017   18:26:59  -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:06:13.017   18:26:59  -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh status
00:06:13.954  Hugepages
00:06:13.954  node     hugesize     free /  total
00:06:13.954  node0   1048576kB        0 /      0
00:06:13.954  node0      2048kB        0 /      0
00:06:13.954  node1   1048576kB        0 /      0
00:06:13.954  node1      2048kB        0 /      0
00:06:13.954  
00:06:13.954  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:06:13.954  I/OAT                     0000:00:04.0    8086   6f20   0       ioatdma          -          -
00:06:13.954  I/OAT                     0000:00:04.1    8086   6f21   0       ioatdma          -          -
00:06:13.954  I/OAT                     0000:00:04.2    8086   6f22   0       ioatdma          -          -
00:06:13.954  I/OAT                     0000:00:04.3    8086   6f23   0       ioatdma          -          -
00:06:13.954  I/OAT                     0000:00:04.4    8086   6f24   0       ioatdma          -          -
00:06:13.954  I/OAT                     0000:00:04.5    8086   6f25   0       ioatdma          -          -
00:06:13.954  I/OAT                     0000:00:04.6    8086   6f26   0       ioatdma          -          -
00:06:13.954  I/OAT                     0000:00:04.7    8086   6f27   0       ioatdma          -          -
00:06:13.954  NVMe                      0000:0d:00.0    8086   0a54   0       nvme             nvme0      nvme0n1
00:06:13.954  I/OAT                     0000:80:04.0    8086   6f20   1       ioatdma          -          -
00:06:13.954  I/OAT                     0000:80:04.1    8086   6f21   1       ioatdma          -          -
00:06:13.954  I/OAT                     0000:80:04.2    8086   6f22   1       ioatdma          -          -
00:06:13.954  I/OAT                     0000:80:04.3    8086   6f23   1       ioatdma          -          -
00:06:13.954  I/OAT                     0000:80:04.4    8086   6f24   1       ioatdma          -          -
00:06:13.954  I/OAT                     0000:80:04.5    8086   6f25   1       ioatdma          -          -
00:06:13.954  I/OAT                     0000:80:04.6    8086   6f26   1       ioatdma          -          -
00:06:13.954  I/OAT                     0000:80:04.7    8086   6f27   1       ioatdma          -          -
00:06:13.954    18:27:00  -- spdk/autotest.sh@117 -- # uname -s
00:06:13.954   18:27:00  -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:06:13.954   18:27:00  -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:06:13.954   18:27:00  -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh
00:06:14.892  0000:00:04.7 (8086 6f27): ioatdma -> vfio-pci
00:06:14.892  0000:00:04.6 (8086 6f26): ioatdma -> vfio-pci
00:06:14.892  0000:00:04.5 (8086 6f25): ioatdma -> vfio-pci
00:06:14.892  0000:00:04.4 (8086 6f24): ioatdma -> vfio-pci
00:06:15.151  0000:00:04.3 (8086 6f23): ioatdma -> vfio-pci
00:06:15.151  0000:00:04.2 (8086 6f22): ioatdma -> vfio-pci
00:06:15.151  0000:00:04.1 (8086 6f21): ioatdma -> vfio-pci
00:06:15.151  0000:00:04.0 (8086 6f20): ioatdma -> vfio-pci
00:06:15.151  0000:80:04.7 (8086 6f27): ioatdma -> vfio-pci
00:06:15.151  0000:80:04.6 (8086 6f26): ioatdma -> vfio-pci
00:06:15.151  0000:80:04.5 (8086 6f25): ioatdma -> vfio-pci
00:06:15.151  0000:80:04.4 (8086 6f24): ioatdma -> vfio-pci
00:06:15.151  0000:80:04.3 (8086 6f23): ioatdma -> vfio-pci
00:06:15.151  0000:80:04.2 (8086 6f22): ioatdma -> vfio-pci
00:06:15.151  0000:80:04.1 (8086 6f21): ioatdma -> vfio-pci
00:06:15.151  0000:80:04.0 (8086 6f20): ioatdma -> vfio-pci
00:06:16.088  0000:0d:00.0 (8086 0a54): nvme -> vfio-pci
00:06:16.348   18:27:02  -- common/autotest_common.sh@1517 -- # sleep 1
00:06:17.282   18:27:03  -- common/autotest_common.sh@1518 -- # bdfs=()
00:06:17.282   18:27:03  -- common/autotest_common.sh@1518 -- # local bdfs
00:06:17.282   18:27:03  -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:06:17.282    18:27:03  -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:06:17.282    18:27:03  -- common/autotest_common.sh@1498 -- # bdfs=()
00:06:17.282    18:27:03  -- common/autotest_common.sh@1498 -- # local bdfs
00:06:17.282    18:27:03  -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:06:17.282     18:27:03  -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/gen_nvme.sh
00:06:17.282     18:27:03  -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:06:17.540    18:27:03  -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:06:17.540    18:27:03  -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0d:00.0
00:06:17.540   18:27:03  -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh reset
00:06:18.477  Waiting for block devices as requested
00:06:18.477  0000:00:04.7 (8086 6f27): vfio-pci -> ioatdma
00:06:18.735  0000:00:04.6 (8086 6f26): vfio-pci -> ioatdma
00:06:18.735  0000:00:04.5 (8086 6f25): vfio-pci -> ioatdma
00:06:18.735  0000:00:04.4 (8086 6f24): vfio-pci -> ioatdma
00:06:18.735  0000:00:04.3 (8086 6f23): vfio-pci -> ioatdma
00:06:18.994  0000:00:04.2 (8086 6f22): vfio-pci -> ioatdma
00:06:18.994  0000:00:04.1 (8086 6f21): vfio-pci -> ioatdma
00:06:18.994  0000:00:04.0 (8086 6f20): vfio-pci -> ioatdma
00:06:18.994  0000:80:04.7 (8086 6f27): vfio-pci -> ioatdma
00:06:19.253  0000:80:04.6 (8086 6f26): vfio-pci -> ioatdma
00:06:19.253  0000:80:04.5 (8086 6f25): vfio-pci -> ioatdma
00:06:19.253  0000:80:04.4 (8086 6f24): vfio-pci -> ioatdma
00:06:19.253  0000:80:04.3 (8086 6f23): vfio-pci -> ioatdma
00:06:19.515  0000:80:04.2 (8086 6f22): vfio-pci -> ioatdma
00:06:19.515  0000:80:04.1 (8086 6f21): vfio-pci -> ioatdma
00:06:19.515  0000:80:04.0 (8086 6f20): vfio-pci -> ioatdma
00:06:19.515  0000:0d:00.0 (8086 0a54): vfio-pci -> nvme
00:06:19.775   18:27:06  -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:06:19.775    18:27:06  -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:0d:00.0
00:06:19.775     18:27:06  -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0
00:06:19.775     18:27:06  -- common/autotest_common.sh@1487 -- # grep 0000:0d:00.0/nvme/nvme
00:06:19.775    18:27:06  -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:03.2/0000:0d:00.0/nvme/nvme0
00:06:19.775    18:27:06  -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:03.2/0000:0d:00.0/nvme/nvme0 ]]
00:06:19.775     18:27:06  -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:03.2/0000:0d:00.0/nvme/nvme0
00:06:19.775    18:27:06  -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0
00:06:19.775   18:27:06  -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0
00:06:19.775   18:27:06  -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]]
00:06:19.775    18:27:06  -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0
00:06:19.775    18:27:06  -- common/autotest_common.sh@1531 -- # grep oacs
00:06:19.775    18:27:06  -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:06:19.775   18:27:06  -- common/autotest_common.sh@1531 -- # oacs=' 0xf'
00:06:19.775   18:27:06  -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:06:19.775   18:27:06  -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:06:19.775    18:27:06  -- common/autotest_common.sh@1540 -- # grep unvmcap
00:06:19.775    18:27:06  -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:06:19.775    18:27:06  -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:06:19.775   18:27:06  -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:06:19.775   18:27:06  -- common/autotest_common.sh@1541 -- # [[  0 -eq 0 ]]
00:06:19.775   18:27:06  -- common/autotest_common.sh@1543 -- # continue
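The pre-cleanup trace above reads the controller's OACS (Optional Admin Command Support) field, masks out the Namespace Management bit (`oacs_ns_manage=8`, i.e. bit 3), and then checks unallocated NVM capacity; since `unvmcap` is 0 the loop hits `continue` and skips namespace cleanup. A minimal Python sketch of that same bitmask logic, using the values printed in this log (the helper name is hypothetical, not part of the script):

```python
def supports_ns_management(oacs: int) -> bool:
    """Bit 3 of OACS indicates Namespace Management support (NVMe spec)."""
    return (oacs & 0x8) != 0

# Values as reported in the trace: oacs=' 0xf', unvmcap=' 0'
oacs = int("0xf", 16)
unvmcap = 0

ns_manage = supports_ns_management(oacs)     # 0xf & 0x8 != 0 -> True
skip_cleanup = ns_manage and unvmcap == 0    # mirrors the `continue` branch
```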
00:06:19.775   18:27:06  -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:06:19.775   18:27:06  -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:19.775   18:27:06  -- common/autotest_common.sh@10 -- # set +x
00:06:19.775   18:27:06  -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:06:19.775   18:27:06  -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:19.775   18:27:06  -- common/autotest_common.sh@10 -- # set +x
00:06:19.775   18:27:06  -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh
00:06:21.154  0000:00:04.7 (8086 6f27): ioatdma -> vfio-pci
00:06:21.154  0000:00:04.6 (8086 6f26): ioatdma -> vfio-pci
00:06:21.154  0000:00:04.5 (8086 6f25): ioatdma -> vfio-pci
00:06:21.154  0000:00:04.4 (8086 6f24): ioatdma -> vfio-pci
00:06:21.154  0000:00:04.3 (8086 6f23): ioatdma -> vfio-pci
00:06:21.154  0000:00:04.2 (8086 6f22): ioatdma -> vfio-pci
00:06:21.154  0000:00:04.1 (8086 6f21): ioatdma -> vfio-pci
00:06:21.154  0000:00:04.0 (8086 6f20): ioatdma -> vfio-pci
00:06:21.154  0000:80:04.7 (8086 6f27): ioatdma -> vfio-pci
00:06:21.154  0000:80:04.6 (8086 6f26): ioatdma -> vfio-pci
00:06:21.154  0000:80:04.5 (8086 6f25): ioatdma -> vfio-pci
00:06:21.154  0000:80:04.4 (8086 6f24): ioatdma -> vfio-pci
00:06:21.154  0000:80:04.3 (8086 6f23): ioatdma -> vfio-pci
00:06:21.154  0000:80:04.2 (8086 6f22): ioatdma -> vfio-pci
00:06:21.154  0000:80:04.1 (8086 6f21): ioatdma -> vfio-pci
00:06:21.154  0000:80:04.0 (8086 6f20): ioatdma -> vfio-pci
00:06:22.091  0000:0d:00.0 (8086 0a54): nvme -> vfio-pci
00:06:22.350   18:27:08  -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:06:22.350   18:27:08  -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:22.350   18:27:08  -- common/autotest_common.sh@10 -- # set +x
00:06:22.350   18:27:08  -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:06:22.350   18:27:08  -- common/autotest_common.sh@1578 -- # mapfile -t bdfs
00:06:22.350    18:27:08  -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54
00:06:22.350    18:27:08  -- common/autotest_common.sh@1563 -- # bdfs=()
00:06:22.350    18:27:08  -- common/autotest_common.sh@1563 -- # _bdfs=()
00:06:22.350    18:27:08  -- common/autotest_common.sh@1563 -- # local bdfs _bdfs
00:06:22.350    18:27:08  -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs))
00:06:22.350     18:27:08  -- common/autotest_common.sh@1564 -- # get_nvme_bdfs
00:06:22.350     18:27:08  -- common/autotest_common.sh@1498 -- # bdfs=()
00:06:22.350     18:27:08  -- common/autotest_common.sh@1498 -- # local bdfs
00:06:22.350     18:27:08  -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:06:22.350      18:27:08  -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/gen_nvme.sh
00:06:22.350      18:27:08  -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:06:22.350     18:27:08  -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:06:22.350     18:27:08  -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0d:00.0
00:06:22.350    18:27:08  -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:06:22.350     18:27:08  -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:0d:00.0/device
00:06:22.350    18:27:08  -- common/autotest_common.sh@1566 -- # device=0x0a54
00:06:22.350    18:27:08  -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]]
00:06:22.350    18:27:08  -- common/autotest_common.sh@1568 -- # bdfs+=($bdf)
00:06:22.350    18:27:08  -- common/autotest_common.sh@1572 -- # (( 1 > 0 ))
00:06:22.350    18:27:08  -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:0d:00.0
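`get_nvme_bdfs_by_id` above enumerates NVMe BDFs from `gen_nvme.sh` and keeps only those whose PCI device ID (read via `cat /sys/bus/pci/devices/<bdf>/device`) matches `0x0a54`. A small Python sketch of that filter, with the sysfs read abstracted behind a callable so it can be stubbed (the function and stub are illustrative, not from the script):

```python
def nvme_bdfs_by_id(bdfs, want, read_device):
    """Keep BDFs whose PCI device ID equals `want`; `read_device(bdf)`
    stands in for reading /sys/bus/pci/devices/<bdf>/device."""
    return [bdf for bdf in bdfs if read_device(bdf) == want]

# Stubbed sysfs lookup using the values printed in the trace above
ids = {"0000:0d:00.0": "0x0a54"}
matched = nvme_bdfs_by_id(["0000:0d:00.0"], "0x0a54", lambda b: ids.get(b))
```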
00:06:22.350   18:27:08  -- common/autotest_common.sh@1579 -- # [[ -z 0000:0d:00.0 ]]
00:06:22.350   18:27:08  -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=378336
00:06:22.350   18:27:08  -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:06:22.350   18:27:08  -- common/autotest_common.sh@1585 -- # waitforlisten 378336
00:06:22.350   18:27:08  -- common/autotest_common.sh@835 -- # '[' -z 378336 ']'
00:06:22.350   18:27:08  -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:22.350   18:27:08  -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:22.350   18:27:08  -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:22.350  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:22.350   18:27:08  -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:22.350   18:27:08  -- common/autotest_common.sh@10 -- # set +x
00:06:22.350  [2024-11-17 18:27:08.908388] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:06:22.350  [2024-11-17 18:27:08.908536] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid378336 ]
00:06:22.609  [2024-11-17 18:27:09.018501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:22.609  [2024-11-17 18:27:09.053859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:23.176   18:27:09  -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:23.176   18:27:09  -- common/autotest_common.sh@868 -- # return 0
00:06:23.176   18:27:09  -- common/autotest_common.sh@1587 -- # bdf_id=0
00:06:23.176   18:27:09  -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}"
00:06:23.176   18:27:09  -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:0d:00.0
00:06:26.468  nvme0n1
00:06:26.468   18:27:12  -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:06:26.468  [2024-11-17 18:27:13.002381] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18
00:06:26.468  [2024-11-17 18:27:13.002444] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18
00:06:26.468  request:
00:06:26.468  {
00:06:26.468    "nvme_ctrlr_name": "nvme0",
00:06:26.468    "password": "test",
00:06:26.468    "method": "bdev_nvme_opal_revert",
00:06:26.468    "req_id": 1
00:06:26.468  }
00:06:26.468  Got JSON-RPC error response
00:06:26.468  response:
00:06:26.468  {
00:06:26.468    "code": -32603,
00:06:26.468    "message": "Internal error"
00:06:26.468  }
00:06:26.468   18:27:13  -- common/autotest_common.sh@1591 -- # true
00:06:26.468   18:27:13  -- common/autotest_common.sh@1592 -- # (( ++bdf_id ))
00:06:26.468   18:27:13  -- common/autotest_common.sh@1595 -- # killprocess 378336
00:06:26.468   18:27:13  -- common/autotest_common.sh@954 -- # '[' -z 378336 ']'
00:06:26.468   18:27:13  -- common/autotest_common.sh@958 -- # kill -0 378336
00:06:26.468    18:27:13  -- common/autotest_common.sh@959 -- # uname
00:06:26.468   18:27:13  -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:26.468    18:27:13  -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 378336
00:06:26.727   18:27:13  -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:26.727   18:27:13  -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:26.727   18:27:13  -- common/autotest_common.sh@972 -- # echo 'killing process with pid 378336'
00:06:26.727  killing process with pid 378336
00:06:26.727   18:27:13  -- common/autotest_common.sh@973 -- # kill 378336
00:06:26.727   18:27:13  -- common/autotest_common.sh@978 -- # wait 378336
00:06:28.634   18:27:14  -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:06:28.634   18:27:14  -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:06:28.634   18:27:14  -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:06:28.634   18:27:14  -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:06:28.634   18:27:14  -- spdk/autotest.sh@149 -- # timing_enter lib
00:06:28.634   18:27:14  -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:28.634   18:27:14  -- common/autotest_common.sh@10 -- # set +x
00:06:28.634   18:27:14  -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:06:28.634   18:27:14  -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/env.sh
00:06:28.634   18:27:14  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:28.634   18:27:14  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:28.634   18:27:14  -- common/autotest_common.sh@10 -- # set +x
00:06:28.634  ************************************
00:06:28.634  START TEST env
00:06:28.634  ************************************
00:06:28.634   18:27:14 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/env.sh
00:06:28.634  * Looking for test storage...
00:06:28.634  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env
00:06:28.634    18:27:14 env -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:28.634     18:27:14 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:28.634     18:27:14 env -- common/autotest_common.sh@1693 -- # lcov --version
00:06:28.634    18:27:14 env -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:28.634    18:27:14 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:28.634    18:27:14 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:28.634    18:27:14 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:28.634    18:27:14 env -- scripts/common.sh@336 -- # IFS=.-:
00:06:28.634    18:27:14 env -- scripts/common.sh@336 -- # read -ra ver1
00:06:28.634    18:27:14 env -- scripts/common.sh@337 -- # IFS=.-:
00:06:28.634    18:27:14 env -- scripts/common.sh@337 -- # read -ra ver2
00:06:28.634    18:27:14 env -- scripts/common.sh@338 -- # local 'op=<'
00:06:28.634    18:27:14 env -- scripts/common.sh@340 -- # ver1_l=2
00:06:28.634    18:27:14 env -- scripts/common.sh@341 -- # ver2_l=1
00:06:28.634    18:27:14 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:28.634    18:27:14 env -- scripts/common.sh@344 -- # case "$op" in
00:06:28.634    18:27:14 env -- scripts/common.sh@345 -- # : 1
00:06:28.634    18:27:14 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:28.634    18:27:14 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:28.634     18:27:14 env -- scripts/common.sh@365 -- # decimal 1
00:06:28.634     18:27:14 env -- scripts/common.sh@353 -- # local d=1
00:06:28.634     18:27:14 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:28.634     18:27:14 env -- scripts/common.sh@355 -- # echo 1
00:06:28.634    18:27:14 env -- scripts/common.sh@365 -- # ver1[v]=1
00:06:28.634     18:27:14 env -- scripts/common.sh@366 -- # decimal 2
00:06:28.634     18:27:14 env -- scripts/common.sh@353 -- # local d=2
00:06:28.634     18:27:14 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:28.634     18:27:14 env -- scripts/common.sh@355 -- # echo 2
00:06:28.634    18:27:14 env -- scripts/common.sh@366 -- # ver2[v]=2
00:06:28.634    18:27:14 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:28.634    18:27:14 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:28.634    18:27:14 env -- scripts/common.sh@368 -- # return 0
00:06:28.634    18:27:14 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:28.634    18:27:14 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:28.634  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:28.634  		--rc genhtml_branch_coverage=1
00:06:28.634  		--rc genhtml_function_coverage=1
00:06:28.634  		--rc genhtml_legend=1
00:06:28.634  		--rc geninfo_all_blocks=1
00:06:28.634  		--rc geninfo_unexecuted_blocks=1
00:06:28.634  		
00:06:28.634  		'
00:06:28.634    18:27:14 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:28.634  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:28.634  		--rc genhtml_branch_coverage=1
00:06:28.634  		--rc genhtml_function_coverage=1
00:06:28.634  		--rc genhtml_legend=1
00:06:28.634  		--rc geninfo_all_blocks=1
00:06:28.634  		--rc geninfo_unexecuted_blocks=1
00:06:28.634  		
00:06:28.634  		'
00:06:28.634    18:27:14 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:06:28.634  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:28.634  		--rc genhtml_branch_coverage=1
00:06:28.634  		--rc genhtml_function_coverage=1
00:06:28.634  		--rc genhtml_legend=1
00:06:28.634  		--rc geninfo_all_blocks=1
00:06:28.634  		--rc geninfo_unexecuted_blocks=1
00:06:28.634  		
00:06:28.634  		'
00:06:28.634    18:27:14 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:06:28.634  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:28.634  		--rc genhtml_branch_coverage=1
00:06:28.634  		--rc genhtml_function_coverage=1
00:06:28.634  		--rc genhtml_legend=1
00:06:28.634  		--rc geninfo_all_blocks=1
00:06:28.634  		--rc geninfo_unexecuted_blocks=1
00:06:28.634  		
00:06:28.634  		'
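The `lt 1.15 2` trace above (scripts/common.sh `cmp_versions`) splits both versions on `.`, `-` and `:` via `IFS=.-:`, then compares field by field until one side wins. A Python sketch of that comparison; treating missing trailing fields as 0 is a simplification here, not necessarily the script's exact padding behavior:

```python
import re

def version_lt(a: str, b: str) -> bool:
    """Field-by-field numeric compare, splitting on '.', '-' and ':'
    like `IFS=.-:; read -ra ver` in the trace."""
    va = [int(x) for x in re.split(r"[.:\-]", a) if x.isdigit()]
    vb = [int(x) for x in re.split(r"[.:\-]", b) if x.isdigit()]
    for i in range(max(len(va), len(vb))):
        x = va[i] if i < len(va) else 0
        y = vb[i] if i < len(vb) else 0
        if x != y:
            return x < y
    return False
```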
00:06:28.634   18:27:14 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/memory/memory_ut
00:06:28.634   18:27:14 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:28.634   18:27:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:28.635   18:27:14 env -- common/autotest_common.sh@10 -- # set +x
00:06:28.635  ************************************
00:06:28.635  START TEST env_memory
00:06:28.635  ************************************
00:06:28.635   18:27:14 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/memory/memory_ut
00:06:28.635  
00:06:28.635  
00:06:28.635       CUnit - A unit testing framework for C - Version 2.1-3
00:06:28.635       http://cunit.sourceforge.net/
00:06:28.635  
00:06:28.635  
00:06:28.635  Suite: memory
00:06:28.635    Test: alloc and free memory map ...[2024-11-17 18:27:15.027163] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:06:28.635  passed
00:06:28.635    Test: mem map translation ...[2024-11-17 18:27:15.066995] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:06:28.635  [2024-11-17 18:27:15.067026] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:06:28.635  [2024-11-17 18:27:15.067099] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:06:28.635  [2024-11-17 18:27:15.067118] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:06:28.635  passed
00:06:28.635    Test: mem map registration ...[2024-11-17 18:27:15.128182] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:06:28.635  [2024-11-17 18:27:15.128211] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:06:28.635  passed
00:06:28.895    Test: mem map adjacent registrations ...passed
00:06:28.895  
00:06:28.895  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:28.895                suites      1      1    n/a      0        0
00:06:28.895                 tests      4      4      4      0        0
00:06:28.895               asserts    152    152    152      0      n/a
00:06:28.895  
00:06:28.895  Elapsed time =    0.219 seconds
00:06:28.895  
00:06:28.895  real	0m0.235s
00:06:28.895  user	0m0.224s
00:06:28.895  sys	0m0.011s
00:06:28.895   18:27:15 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:28.895   18:27:15 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:06:28.895  ************************************
00:06:28.896  END TEST env_memory
00:06:28.896  ************************************
00:06:28.896   18:27:15 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/vtophys/vtophys
00:06:28.896   18:27:15 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:28.896   18:27:15 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:28.896   18:27:15 env -- common/autotest_common.sh@10 -- # set +x
00:06:28.896  ************************************
00:06:28.896  START TEST env_vtophys
00:06:28.896  ************************************
00:06:28.896   18:27:15 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/vtophys/vtophys
00:06:28.896  EAL: lib.eal log level changed from notice to debug
00:06:28.896  EAL: Detected lcore 0 as core 0 on socket 0
00:06:28.896  EAL: Detected lcore 1 as core 1 on socket 0
00:06:28.896  EAL: Detected lcore 2 as core 2 on socket 0
00:06:28.896  EAL: Detected lcore 3 as core 3 on socket 0
00:06:28.896  EAL: Detected lcore 4 as core 4 on socket 0
00:06:28.896  EAL: Detected lcore 5 as core 5 on socket 0
00:06:28.896  EAL: Detected lcore 6 as core 8 on socket 0
00:06:28.896  EAL: Detected lcore 7 as core 9 on socket 0
00:06:28.896  EAL: Detected lcore 8 as core 10 on socket 0
00:06:28.896  EAL: Detected lcore 9 as core 11 on socket 0
00:06:28.896  EAL: Detected lcore 10 as core 12 on socket 0
00:06:28.896  EAL: Detected lcore 11 as core 16 on socket 0
00:06:28.896  EAL: Detected lcore 12 as core 17 on socket 0
00:06:28.896  EAL: Detected lcore 13 as core 18 on socket 0
00:06:28.896  EAL: Detected lcore 14 as core 19 on socket 0
00:06:28.896  EAL: Detected lcore 15 as core 20 on socket 0
00:06:28.896  EAL: Detected lcore 16 as core 21 on socket 0
00:06:28.896  EAL: Detected lcore 17 as core 24 on socket 0
00:06:28.896  EAL: Detected lcore 18 as core 25 on socket 0
00:06:28.896  EAL: Detected lcore 19 as core 26 on socket 0
00:06:28.896  EAL: Detected lcore 20 as core 27 on socket 0
00:06:28.896  EAL: Detected lcore 21 as core 28 on socket 0
00:06:28.896  EAL: Detected lcore 22 as core 0 on socket 1
00:06:28.896  EAL: Detected lcore 23 as core 1 on socket 1
00:06:28.896  EAL: Detected lcore 24 as core 2 on socket 1
00:06:28.896  EAL: Detected lcore 25 as core 3 on socket 1
00:06:28.896  EAL: Detected lcore 26 as core 4 on socket 1
00:06:28.896  EAL: Detected lcore 27 as core 5 on socket 1
00:06:28.896  EAL: Detected lcore 28 as core 8 on socket 1
00:06:28.896  EAL: Detected lcore 29 as core 9 on socket 1
00:06:28.896  EAL: Detected lcore 30 as core 10 on socket 1
00:06:28.896  EAL: Detected lcore 31 as core 11 on socket 1
00:06:28.896  EAL: Detected lcore 32 as core 12 on socket 1
00:06:28.896  EAL: Detected lcore 33 as core 16 on socket 1
00:06:28.896  EAL: Detected lcore 34 as core 17 on socket 1
00:06:28.896  EAL: Detected lcore 35 as core 18 on socket 1
00:06:28.896  EAL: Detected lcore 36 as core 19 on socket 1
00:06:28.896  EAL: Detected lcore 37 as core 20 on socket 1
00:06:28.896  EAL: Detected lcore 38 as core 21 on socket 1
00:06:28.896  EAL: Detected lcore 39 as core 24 on socket 1
00:06:28.896  EAL: Detected lcore 40 as core 25 on socket 1
00:06:28.896  EAL: Detected lcore 41 as core 26 on socket 1
00:06:28.896  EAL: Detected lcore 42 as core 27 on socket 1
00:06:28.896  EAL: Detected lcore 43 as core 28 on socket 1
00:06:28.896  EAL: Detected lcore 44 as core 0 on socket 0
00:06:28.896  EAL: Detected lcore 45 as core 1 on socket 0
00:06:28.896  EAL: Detected lcore 46 as core 2 on socket 0
00:06:28.896  EAL: Detected lcore 47 as core 3 on socket 0
00:06:28.896  EAL: Detected lcore 48 as core 4 on socket 0
00:06:28.896  EAL: Detected lcore 49 as core 5 on socket 0
00:06:28.896  EAL: Detected lcore 50 as core 8 on socket 0
00:06:28.896  EAL: Detected lcore 51 as core 9 on socket 0
00:06:28.896  EAL: Detected lcore 52 as core 10 on socket 0
00:06:28.896  EAL: Detected lcore 53 as core 11 on socket 0
00:06:28.896  EAL: Detected lcore 54 as core 12 on socket 0
00:06:28.896  EAL: Detected lcore 55 as core 16 on socket 0
00:06:28.896  EAL: Detected lcore 56 as core 17 on socket 0
00:06:28.896  EAL: Detected lcore 57 as core 18 on socket 0
00:06:28.896  EAL: Detected lcore 58 as core 19 on socket 0
00:06:28.896  EAL: Detected lcore 59 as core 20 on socket 0
00:06:28.896  EAL: Detected lcore 60 as core 21 on socket 0
00:06:28.896  EAL: Detected lcore 61 as core 24 on socket 0
00:06:28.896  EAL: Detected lcore 62 as core 25 on socket 0
00:06:28.896  EAL: Detected lcore 63 as core 26 on socket 0
00:06:28.896  EAL: Detected lcore 64 as core 27 on socket 0
00:06:28.896  EAL: Detected lcore 65 as core 28 on socket 0
00:06:28.896  EAL: Detected lcore 66 as core 0 on socket 1
00:06:28.896  EAL: Detected lcore 67 as core 1 on socket 1
00:06:28.896  EAL: Detected lcore 68 as core 2 on socket 1
00:06:28.896  EAL: Detected lcore 69 as core 3 on socket 1
00:06:28.896  EAL: Detected lcore 70 as core 4 on socket 1
00:06:28.896  EAL: Detected lcore 71 as core 5 on socket 1
00:06:28.896  EAL: Detected lcore 72 as core 8 on socket 1
00:06:28.896  EAL: Detected lcore 73 as core 9 on socket 1
00:06:28.896  EAL: Detected lcore 74 as core 10 on socket 1
00:06:28.896  EAL: Detected lcore 75 as core 11 on socket 1
00:06:28.896  EAL: Detected lcore 76 as core 12 on socket 1
00:06:28.896  EAL: Detected lcore 77 as core 16 on socket 1
00:06:28.896  EAL: Detected lcore 78 as core 17 on socket 1
00:06:28.896  EAL: Detected lcore 79 as core 18 on socket 1
00:06:28.896  EAL: Detected lcore 80 as core 19 on socket 1
00:06:28.896  EAL: Detected lcore 81 as core 20 on socket 1
00:06:28.896  EAL: Detected lcore 82 as core 21 on socket 1
00:06:28.896  EAL: Detected lcore 83 as core 24 on socket 1
00:06:28.896  EAL: Detected lcore 84 as core 25 on socket 1
00:06:28.896  EAL: Detected lcore 85 as core 26 on socket 1
00:06:28.896  EAL: Detected lcore 86 as core 27 on socket 1
00:06:28.896  EAL: Detected lcore 87 as core 28 on socket 1
00:06:28.896  EAL: Maximum logical cores by configuration: 128
00:06:28.896  EAL: Detected CPU lcores: 88
00:06:28.896  EAL: Detected NUMA nodes: 2
00:06:28.896  EAL: Checking presence of .so 'librte_eal.so.24.0'
00:06:28.896  EAL: Detected shared linkage of DPDK
00:06:28.896  EAL: open shared lib /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_auxiliary.so.24.0
00:06:28.896  EAL: open shared lib /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0
00:06:28.896  EAL: open shared lib /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0
00:06:28.896  EAL: open shared lib /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_common_mlx5.so.24.0
00:06:28.896  EAL: open shared lib /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_common_qat.so.24.0
00:06:28.896  EAL: open shared lib /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0
00:06:28.896  EAL: open shared lib /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0
00:06:28.896  EAL: pmd.net.i40e.init log level changed from disabled to notice
00:06:28.896  EAL: pmd.net.i40e.driver log level changed from disabled to notice
00:06:28.896  EAL: open shared lib /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_crypto_ipsec_mb.so.24.0
00:06:28.896  EAL: open shared lib /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_crypto_mlx5.so.24.0
00:06:28.896  EAL: open shared lib /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_auxiliary.so
00:06:28.896  EAL: open shared lib /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so
00:06:28.896  EAL: open shared lib /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so
00:06:28.896  EAL: open shared lib /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_common_mlx5.so
00:06:28.896  EAL: open shared lib /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_common_qat.so
00:06:28.896  EAL: open shared lib /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so
00:06:28.896  EAL: open shared lib /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so
00:06:28.896  EAL: open shared lib /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_crypto_ipsec_mb.so
00:06:28.896  EAL: open shared lib /var/jenkins/workspace/vfio-user-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_crypto_mlx5.so
00:06:28.896  EAL: No shared files mode enabled, IPC will be disabled
00:06:28.896  EAL: No shared files mode enabled, IPC is disabled
00:06:28.896  EAL: Bus pci wants IOVA as 'DC'
00:06:28.896  EAL: Bus auxiliary wants IOVA as 'DC'
00:06:28.896  EAL: Bus vdev wants IOVA as 'DC'
00:06:28.896  EAL: Buses did not request a specific IOVA mode.
00:06:28.896  EAL: IOMMU is available, selecting IOVA as VA mode.
00:06:28.896  EAL: Selected IOVA mode 'VA'
00:06:28.896  EAL: Probing VFIO support...
00:06:28.896  EAL: IOMMU type 1 (Type 1) is supported
00:06:28.896  EAL: IOMMU type 7 (sPAPR) is not supported
00:06:28.896  EAL: IOMMU type 8 (No-IOMMU) is not supported
00:06:28.896  EAL: VFIO support initialized
00:06:28.896  EAL: Ask a virtual area of 0x2e000 bytes
00:06:28.896  EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:06:28.896  EAL: Setting up physically contiguous memory...
00:06:28.896  EAL: Setting maximum number of open files to 524288
00:06:28.896  EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:06:28.896  EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:06:28.896  EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:06:28.896  EAL: Ask a virtual area of 0x61000 bytes
00:06:28.896  EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:06:28.896  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:28.896  EAL: Ask a virtual area of 0x400000000 bytes
00:06:28.896  EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:06:28.896  EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:06:28.896  EAL: Ask a virtual area of 0x61000 bytes
00:06:28.896  EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:06:28.896  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:28.896  EAL: Ask a virtual area of 0x400000000 bytes
00:06:28.896  EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:06:28.896  EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:06:28.896  EAL: Ask a virtual area of 0x61000 bytes
00:06:28.896  EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:06:28.896  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:28.896  EAL: Ask a virtual area of 0x400000000 bytes
00:06:28.896  EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:06:28.896  EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:06:28.896  EAL: Ask a virtual area of 0x61000 bytes
00:06:28.896  EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:06:28.896  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:28.896  EAL: Ask a virtual area of 0x400000000 bytes
00:06:28.896  EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:06:28.897  EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:06:28.897  EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:06:28.897  EAL: Ask a virtual area of 0x61000 bytes
00:06:28.897  EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:06:28.897  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:06:28.897  EAL: Ask a virtual area of 0x400000000 bytes
00:06:28.897  EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:06:28.897  EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:06:28.897  EAL: Ask a virtual area of 0x61000 bytes
00:06:28.897  EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:06:28.897  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:06:28.897  EAL: Ask a virtual area of 0x400000000 bytes
00:06:28.897  EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:06:28.897  EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:06:28.897  EAL: Ask a virtual area of 0x61000 bytes
00:06:28.897  EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:06:28.897  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:06:28.897  EAL: Ask a virtual area of 0x400000000 bytes
00:06:28.897  EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:06:28.897  EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:06:28.897  EAL: Ask a virtual area of 0x61000 bytes
00:06:28.897  EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:06:28.897  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:06:28.897  EAL: Ask a virtual area of 0x400000000 bytes
00:06:28.897  EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:06:28.897  EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
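The memseg setup above reserves virtual address space before any hugepages are backed: 4 segment lists per NUMA node, each sized for `n_segs:8192` pages of `hugepage_sz:2097152` (2 MiB). That arithmetic accounts exactly for each `size = 0x400000000` reservation in the log, as this quick check shows:

```python
# EAL memseg arithmetic from the trace
n_segs = 8192               # "n_segs:8192"
hugepage_sz = 0x200000      # 2097152 bytes, i.e. 2 MiB hugepages
list_bytes = n_segs * hugepage_sz

# Matches each "VA reserved for memseg list ..., size 400000000" (hex) line
assert list_bytes == 0x400000000           # 16 GiB per list
per_socket = 4 * list_bytes                # "Creating 4 segment lists" per socket
assert per_socket == 64 * 1024**3          # 64 GiB of VA per NUMA node
```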
00:06:28.897  EAL: Hugepages will be freed exactly as allocated.
00:06:28.897  EAL: No shared files mode enabled, IPC is disabled
00:06:28.897  EAL: No shared files mode enabled, IPC is disabled
00:06:28.897  EAL: TSC frequency is ~2200000 KHz
00:06:28.897  EAL: Main lcore 0 is ready (tid=7fd7b703bb40;cpuset=[0])
00:06:28.897  EAL: Trying to obtain current memory policy.
00:06:28.897  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:28.897  EAL: Restoring previous memory policy: 0
00:06:28.897  EAL: request: mp_malloc_sync
00:06:28.897  EAL: No shared files mode enabled, IPC is disabled
00:06:28.897  EAL: Heap on socket 0 was expanded by 2MB
00:06:28.897  EAL: No shared files mode enabled, IPC is disabled
00:06:28.897  EAL: No shared files mode enabled, IPC is disabled
00:06:28.897  EAL: No PCI address specified using 'addr=<id>' in: bus=pci
00:06:28.897  EAL: Mem event callback 'spdk:(nil)' registered
00:06:28.897  
00:06:28.897  
00:06:28.897       CUnit - A unit testing framework for C - Version 2.1-3
00:06:28.897       http://cunit.sourceforge.net/
00:06:28.897  
00:06:28.897  
00:06:28.897  Suite: components_suite
00:06:29.156    Test: vtophys_malloc_test ...passed
00:06:29.157    Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:06:29.157  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:29.157  EAL: Restoring previous memory policy: 4
00:06:29.157  EAL: Calling mem event callback 'spdk:(nil)'
00:06:29.157  EAL: request: mp_malloc_sync
00:06:29.157  EAL: No shared files mode enabled, IPC is disabled
00:06:29.157  EAL: Heap on socket 0 was expanded by 4MB
00:06:29.157  EAL: Calling mem event callback 'spdk:(nil)'
00:06:29.157  EAL: request: mp_malloc_sync
00:06:29.157  EAL: No shared files mode enabled, IPC is disabled
00:06:29.157  EAL: Heap on socket 0 was shrunk by 4MB
00:06:29.157  EAL: Trying to obtain current memory policy.
00:06:29.157  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:29.157  EAL: Restoring previous memory policy: 4
00:06:29.157  EAL: Calling mem event callback 'spdk:(nil)'
00:06:29.157  EAL: request: mp_malloc_sync
00:06:29.157  EAL: No shared files mode enabled, IPC is disabled
00:06:29.157  EAL: Heap on socket 0 was expanded by 6MB
00:06:29.157  EAL: Calling mem event callback 'spdk:(nil)'
00:06:29.157  EAL: request: mp_malloc_sync
00:06:29.157  EAL: No shared files mode enabled, IPC is disabled
00:06:29.157  EAL: Heap on socket 0 was shrunk by 6MB
00:06:29.157  EAL: Trying to obtain current memory policy.
00:06:29.157  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:29.157  EAL: Restoring previous memory policy: 4
00:06:29.157  EAL: Calling mem event callback 'spdk:(nil)'
00:06:29.157  EAL: request: mp_malloc_sync
00:06:29.157  EAL: No shared files mode enabled, IPC is disabled
00:06:29.157  EAL: Heap on socket 0 was expanded by 10MB
00:06:29.157  EAL: Calling mem event callback 'spdk:(nil)'
00:06:29.157  EAL: request: mp_malloc_sync
00:06:29.157  EAL: No shared files mode enabled, IPC is disabled
00:06:29.157  EAL: Heap on socket 0 was shrunk by 10MB
00:06:29.157  EAL: Trying to obtain current memory policy.
00:06:29.157  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:29.157  EAL: Restoring previous memory policy: 4
00:06:29.157  EAL: Calling mem event callback 'spdk:(nil)'
00:06:29.157  EAL: request: mp_malloc_sync
00:06:29.157  EAL: No shared files mode enabled, IPC is disabled
00:06:29.157  EAL: Heap on socket 0 was expanded by 18MB
00:06:29.157  EAL: Calling mem event callback 'spdk:(nil)'
00:06:29.157  EAL: request: mp_malloc_sync
00:06:29.157  EAL: No shared files mode enabled, IPC is disabled
00:06:29.157  EAL: Heap on socket 0 was shrunk by 18MB
00:06:29.157  EAL: Trying to obtain current memory policy.
00:06:29.157  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:29.157  EAL: Restoring previous memory policy: 4
00:06:29.157  EAL: Calling mem event callback 'spdk:(nil)'
00:06:29.157  EAL: request: mp_malloc_sync
00:06:29.157  EAL: No shared files mode enabled, IPC is disabled
00:06:29.157  EAL: Heap on socket 0 was expanded by 34MB
00:06:29.157  EAL: Calling mem event callback 'spdk:(nil)'
00:06:29.157  EAL: request: mp_malloc_sync
00:06:29.157  EAL: No shared files mode enabled, IPC is disabled
00:06:29.157  EAL: Heap on socket 0 was shrunk by 34MB
00:06:29.157  EAL: Trying to obtain current memory policy.
00:06:29.157  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:29.415  EAL: Restoring previous memory policy: 4
00:06:29.415  EAL: Calling mem event callback 'spdk:(nil)'
00:06:29.415  EAL: request: mp_malloc_sync
00:06:29.415  EAL: No shared files mode enabled, IPC is disabled
00:06:29.415  EAL: Heap on socket 0 was expanded by 66MB
00:06:29.415  EAL: Calling mem event callback 'spdk:(nil)'
00:06:29.415  EAL: request: mp_malloc_sync
00:06:29.415  EAL: No shared files mode enabled, IPC is disabled
00:06:29.415  EAL: Heap on socket 0 was shrunk by 66MB
00:06:29.415  EAL: Trying to obtain current memory policy.
00:06:29.415  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:29.415  EAL: Restoring previous memory policy: 4
00:06:29.415  EAL: Calling mem event callback 'spdk:(nil)'
00:06:29.415  EAL: request: mp_malloc_sync
00:06:29.415  EAL: No shared files mode enabled, IPC is disabled
00:06:29.415  EAL: Heap on socket 0 was expanded by 130MB
00:06:29.415  EAL: Calling mem event callback 'spdk:(nil)'
00:06:29.415  EAL: request: mp_malloc_sync
00:06:29.415  EAL: No shared files mode enabled, IPC is disabled
00:06:29.415  EAL: Heap on socket 0 was shrunk by 130MB
00:06:29.415  EAL: Trying to obtain current memory policy.
00:06:29.415  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:29.415  EAL: Restoring previous memory policy: 4
00:06:29.415  EAL: Calling mem event callback 'spdk:(nil)'
00:06:29.415  EAL: request: mp_malloc_sync
00:06:29.415  EAL: No shared files mode enabled, IPC is disabled
00:06:29.415  EAL: Heap on socket 0 was expanded by 258MB
00:06:29.415  EAL: Calling mem event callback 'spdk:(nil)'
00:06:29.673  EAL: request: mp_malloc_sync
00:06:29.673  EAL: No shared files mode enabled, IPC is disabled
00:06:29.673  EAL: Heap on socket 0 was shrunk by 258MB
00:06:29.673  EAL: Trying to obtain current memory policy.
00:06:29.673  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:29.673  EAL: Restoring previous memory policy: 4
00:06:29.673  EAL: Calling mem event callback 'spdk:(nil)'
00:06:29.673  EAL: request: mp_malloc_sync
00:06:29.673  EAL: No shared files mode enabled, IPC is disabled
00:06:29.673  EAL: Heap on socket 0 was expanded by 514MB
00:06:29.673  EAL: Calling mem event callback 'spdk:(nil)'
00:06:29.932  EAL: request: mp_malloc_sync
00:06:29.932  EAL: No shared files mode enabled, IPC is disabled
00:06:29.932  EAL: Heap on socket 0 was shrunk by 514MB
00:06:29.932  EAL: Trying to obtain current memory policy.
00:06:29.932  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:30.192  EAL: Restoring previous memory policy: 4
00:06:30.192  EAL: Calling mem event callback 'spdk:(nil)'
00:06:30.192  EAL: request: mp_malloc_sync
00:06:30.192  EAL: No shared files mode enabled, IPC is disabled
00:06:30.192  EAL: Heap on socket 0 was expanded by 1026MB
00:06:30.451  EAL: Calling mem event callback 'spdk:(nil)'
00:06:30.709  EAL: request: mp_malloc_sync
00:06:30.709  EAL: No shared files mode enabled, IPC is disabled
00:06:30.709  EAL: Heap on socket 0 was shrunk by 1026MB
00:06:30.709  passed
00:06:30.709  
00:06:30.709  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:30.709                suites      1      1    n/a      0        0
00:06:30.709                 tests      2      2      2      0        0
00:06:30.709               asserts    497    497    497      0      n/a
00:06:30.709  
00:06:30.709  Elapsed time =    1.662 seconds
00:06:30.709  EAL: Calling mem event callback 'spdk:(nil)'
00:06:30.709  EAL: request: mp_malloc_sync
00:06:30.709  EAL: No shared files mode enabled, IPC is disabled
00:06:30.709  EAL: Heap on socket 0 was shrunk by 2MB
00:06:30.709  EAL: No shared files mode enabled, IPC is disabled
00:06:30.709  EAL: No shared files mode enabled, IPC is disabled
00:06:30.709  EAL: No shared files mode enabled, IPC is disabled
00:06:30.709  
00:06:30.709  real	0m1.848s
00:06:30.709  user	0m0.986s
00:06:30.709  sys	0m0.826s
00:06:30.709   18:27:17 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:30.709   18:27:17 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:06:30.709  ************************************
00:06:30.709  END TEST env_vtophys
00:06:30.709  ************************************
00:06:30.709   18:27:17 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/pci/pci_ut
00:06:30.709   18:27:17 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:30.709   18:27:17 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:30.709   18:27:17 env -- common/autotest_common.sh@10 -- # set +x
00:06:30.709  ************************************
00:06:30.709  START TEST env_pci
00:06:30.709  ************************************
00:06:30.709   18:27:17 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/pci/pci_ut
00:06:30.709  
00:06:30.709  
00:06:30.709       CUnit - A unit testing framework for C - Version 2.1-3
00:06:30.709       http://cunit.sourceforge.net/
00:06:30.709  
00:06:30.709  
00:06:30.709  Suite: pci
00:06:30.709    Test: pci_hook ...[2024-11-17 18:27:17.168571] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 379874 has claimed it
00:06:30.709  EAL: Cannot find device (10000:00:01.0)
00:06:30.709  EAL: Failed to attach device on primary process
00:06:30.709  passed
00:06:30.709  
00:06:30.709  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:30.709                suites      1      1    n/a      0        0
00:06:30.709                 tests      1      1      1      0        0
00:06:30.709               asserts     25     25     25      0      n/a
00:06:30.709  
00:06:30.709  Elapsed time =    0.028 seconds
00:06:30.709  
00:06:30.709  real	0m0.066s
00:06:30.709  user	0m0.027s
00:06:30.709  sys	0m0.039s
00:06:30.709   18:27:17 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:30.709   18:27:17 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:06:30.709  ************************************
00:06:30.709  END TEST env_pci
00:06:30.709  ************************************
00:06:30.709   18:27:17 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:06:30.709    18:27:17 env -- env/env.sh@15 -- # uname
00:06:30.709   18:27:17 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:06:30.709   18:27:17 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:06:30.709   18:27:17 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:06:30.709   18:27:17 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:06:30.709   18:27:17 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:30.709   18:27:17 env -- common/autotest_common.sh@10 -- # set +x
00:06:30.709  ************************************
00:06:30.709  START TEST env_dpdk_post_init
00:06:30.709  ************************************
00:06:30.709   18:27:17 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:06:30.709  EAL: Detected CPU lcores: 88
00:06:30.709  EAL: Detected NUMA nodes: 2
00:06:30.709  EAL: Detected shared linkage of DPDK
00:06:30.966  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:06:30.967  EAL: Selected IOVA mode 'VA'
00:06:30.967  EAL: VFIO support initialized
00:06:30.967  TELEMETRY: No legacy callbacks, legacy socket not created
00:06:30.967  EAL: Using IOMMU type 1 (Type 1)
00:06:30.967  EAL: Ignore mapping IO port bar(1)
00:06:30.967  EAL: Probe PCI driver: spdk_ioat (8086:6f20) device: 0000:00:04.0 (socket 0)
00:06:30.967  EAL: Ignore mapping IO port bar(1)
00:06:30.967  EAL: Probe PCI driver: spdk_ioat (8086:6f21) device: 0000:00:04.1 (socket 0)
00:06:30.967  EAL: Ignore mapping IO port bar(1)
00:06:30.967  EAL: Probe PCI driver: spdk_ioat (8086:6f22) device: 0000:00:04.2 (socket 0)
00:06:30.967  EAL: Ignore mapping IO port bar(1)
00:06:30.967  EAL: Probe PCI driver: spdk_ioat (8086:6f23) device: 0000:00:04.3 (socket 0)
00:06:30.967  EAL: Ignore mapping IO port bar(1)
00:06:30.967  EAL: Probe PCI driver: spdk_ioat (8086:6f24) device: 0000:00:04.4 (socket 0)
00:06:30.967  EAL: Ignore mapping IO port bar(1)
00:06:30.967  EAL: Probe PCI driver: spdk_ioat (8086:6f25) device: 0000:00:04.5 (socket 0)
00:06:30.967  EAL: Ignore mapping IO port bar(1)
00:06:30.967  EAL: Probe PCI driver: spdk_ioat (8086:6f26) device: 0000:00:04.6 (socket 0)
00:06:30.967  EAL: Ignore mapping IO port bar(1)
00:06:30.967  EAL: Probe PCI driver: spdk_ioat (8086:6f27) device: 0000:00:04.7 (socket 0)
00:06:31.902  EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:0d:00.0 (socket 0)
00:06:31.902  EAL: Ignore mapping IO port bar(1)
00:06:31.902  EAL: Probe PCI driver: spdk_ioat (8086:6f20) device: 0000:80:04.0 (socket 1)
00:06:31.902  EAL: Ignore mapping IO port bar(1)
00:06:31.902  EAL: Probe PCI driver: spdk_ioat (8086:6f21) device: 0000:80:04.1 (socket 1)
00:06:31.902  EAL: Ignore mapping IO port bar(1)
00:06:31.902  EAL: Probe PCI driver: spdk_ioat (8086:6f22) device: 0000:80:04.2 (socket 1)
00:06:31.902  EAL: Ignore mapping IO port bar(1)
00:06:31.902  EAL: Probe PCI driver: spdk_ioat (8086:6f23) device: 0000:80:04.3 (socket 1)
00:06:31.902  EAL: Ignore mapping IO port bar(1)
00:06:31.902  EAL: Probe PCI driver: spdk_ioat (8086:6f24) device: 0000:80:04.4 (socket 1)
00:06:31.902  EAL: Ignore mapping IO port bar(1)
00:06:31.902  EAL: Probe PCI driver: spdk_ioat (8086:6f25) device: 0000:80:04.5 (socket 1)
00:06:31.902  EAL: Ignore mapping IO port bar(1)
00:06:31.902  EAL: Probe PCI driver: spdk_ioat (8086:6f26) device: 0000:80:04.6 (socket 1)
00:06:31.902  EAL: Ignore mapping IO port bar(1)
00:06:31.902  EAL: Probe PCI driver: spdk_ioat (8086:6f27) device: 0000:80:04.7 (socket 1)
00:06:35.191  EAL: Releasing PCI mapped resource for 0000:0d:00.0
00:06:35.191  EAL: Calling pci_unmap_resource for 0000:0d:00.0 at 0x202001020000
00:06:35.191  Starting DPDK initialization...
00:06:35.191  Starting SPDK post initialization...
00:06:35.191  SPDK NVMe probe
00:06:35.191  Attaching to 0000:0d:00.0
00:06:35.191  Attached to 0000:0d:00.0
00:06:35.191  Cleaning up...
00:06:35.191  
00:06:35.191  real	0m4.492s
00:06:35.191  user	0m3.336s
00:06:35.191  sys	0m0.216s
00:06:35.191   18:27:21 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:35.191   18:27:21 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:06:35.191  ************************************
00:06:35.191  END TEST env_dpdk_post_init
00:06:35.191  ************************************
00:06:35.191    18:27:21 env -- env/env.sh@26 -- # uname
00:06:35.191   18:27:21 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:06:35.191   18:27:21 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:06:35.191   18:27:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:35.191   18:27:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:35.191   18:27:21 env -- common/autotest_common.sh@10 -- # set +x
00:06:35.450  ************************************
00:06:35.450  START TEST env_mem_callbacks
00:06:35.450  ************************************
00:06:35.450   18:27:21 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:06:35.450  EAL: Detected CPU lcores: 88
00:06:35.450  EAL: Detected NUMA nodes: 2
00:06:35.450  EAL: Detected shared linkage of DPDK
00:06:35.450  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:06:35.450  EAL: Selected IOVA mode 'VA'
00:06:35.450  EAL: VFIO support initialized
00:06:35.450  TELEMETRY: No legacy callbacks, legacy socket not created
00:06:35.450  
00:06:35.450  
00:06:35.450       CUnit - A unit testing framework for C - Version 2.1-3
00:06:35.450       http://cunit.sourceforge.net/
00:06:35.450  
00:06:35.450  
00:06:35.450  Suite: memory
00:06:35.450    Test: test ...
00:06:35.450  register 0x200000200000 2097152
00:06:35.450  malloc 3145728
00:06:35.450  register 0x200000400000 4194304
00:06:35.450  buf 0x200000500000 len 3145728 PASSED
00:06:35.450  malloc 64
00:06:35.450  buf 0x2000004fff40 len 64 PASSED
00:06:35.450  malloc 4194304
00:06:35.450  register 0x200000800000 6291456
00:06:35.450  buf 0x200000a00000 len 4194304 PASSED
00:06:35.450  free 0x200000500000 3145728
00:06:35.450  free 0x2000004fff40 64
00:06:35.450  unregister 0x200000400000 4194304 PASSED
00:06:35.450  free 0x200000a00000 4194304
00:06:35.450  unregister 0x200000800000 6291456 PASSED
00:06:35.450  malloc 8388608
00:06:35.450  register 0x200000400000 10485760
00:06:35.450  buf 0x200000600000 len 8388608 PASSED
00:06:35.450  free 0x200000600000 8388608
00:06:35.450  unregister 0x200000400000 10485760 PASSED
00:06:35.450  passed
00:06:35.450  
00:06:35.450  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:35.450                suites      1      1    n/a      0        0
00:06:35.450                 tests      1      1      1      0        0
00:06:35.450               asserts     15     15     15      0      n/a
00:06:35.450  
00:06:35.450  Elapsed time =    0.006 seconds
00:06:35.450  
00:06:35.450  real	0m0.089s
00:06:35.450  user	0m0.031s
00:06:35.450  sys	0m0.058s
00:06:35.450   18:27:21 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:35.450   18:27:21 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:06:35.450  ************************************
00:06:35.450  END TEST env_mem_callbacks
00:06:35.450  ************************************
00:06:35.450  
00:06:35.450  real	0m7.043s
00:06:35.450  user	0m4.763s
00:06:35.450  sys	0m1.323s
00:06:35.450   18:27:21 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:35.450   18:27:21 env -- common/autotest_common.sh@10 -- # set +x
00:06:35.450  ************************************
00:06:35.450  END TEST env
00:06:35.450  ************************************
00:06:35.450   18:27:21  -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/rpc.sh
00:06:35.450   18:27:21  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:35.450   18:27:21  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:35.450   18:27:21  -- common/autotest_common.sh@10 -- # set +x
00:06:35.450  ************************************
00:06:35.450  START TEST rpc
00:06:35.450  ************************************
00:06:35.450   18:27:21 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/rpc.sh
00:06:35.450  * Looking for test storage...
00:06:35.450  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc
00:06:35.450    18:27:21 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:35.450     18:27:21 rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:06:35.450     18:27:21 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:35.710    18:27:22 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:35.710    18:27:22 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:35.710    18:27:22 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:35.710    18:27:22 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:35.710    18:27:22 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:06:35.710    18:27:22 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:06:35.710    18:27:22 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:06:35.710    18:27:22 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:06:35.710    18:27:22 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:06:35.711    18:27:22 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:06:35.711    18:27:22 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:06:35.711    18:27:22 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:35.711    18:27:22 rpc -- scripts/common.sh@344 -- # case "$op" in
00:06:35.711    18:27:22 rpc -- scripts/common.sh@345 -- # : 1
00:06:35.711    18:27:22 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:35.711    18:27:22 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:35.711     18:27:22 rpc -- scripts/common.sh@365 -- # decimal 1
00:06:35.711     18:27:22 rpc -- scripts/common.sh@353 -- # local d=1
00:06:35.711     18:27:22 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:35.711     18:27:22 rpc -- scripts/common.sh@355 -- # echo 1
00:06:35.711    18:27:22 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:06:35.711     18:27:22 rpc -- scripts/common.sh@366 -- # decimal 2
00:06:35.711     18:27:22 rpc -- scripts/common.sh@353 -- # local d=2
00:06:35.711     18:27:22 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:35.711     18:27:22 rpc -- scripts/common.sh@355 -- # echo 2
00:06:35.711    18:27:22 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:06:35.711    18:27:22 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:35.711    18:27:22 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:35.711    18:27:22 rpc -- scripts/common.sh@368 -- # return 0
00:06:35.711    18:27:22 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:35.711    18:27:22 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:35.711  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:35.711  		--rc genhtml_branch_coverage=1
00:06:35.711  		--rc genhtml_function_coverage=1
00:06:35.711  		--rc genhtml_legend=1
00:06:35.711  		--rc geninfo_all_blocks=1
00:06:35.711  		--rc geninfo_unexecuted_blocks=1
00:06:35.711  		
00:06:35.711  		'
00:06:35.711    18:27:22 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:35.711  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:35.711  		--rc genhtml_branch_coverage=1
00:06:35.711  		--rc genhtml_function_coverage=1
00:06:35.711  		--rc genhtml_legend=1
00:06:35.711  		--rc geninfo_all_blocks=1
00:06:35.711  		--rc geninfo_unexecuted_blocks=1
00:06:35.711  		
00:06:35.711  		'
00:06:35.711    18:27:22 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:06:35.711  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:35.711  		--rc genhtml_branch_coverage=1
00:06:35.711  		--rc genhtml_function_coverage=1
00:06:35.711  		--rc genhtml_legend=1
00:06:35.711  		--rc geninfo_all_blocks=1
00:06:35.711  		--rc geninfo_unexecuted_blocks=1
00:06:35.711  		
00:06:35.711  		'
00:06:35.711    18:27:22 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:06:35.711  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:35.711  		--rc genhtml_branch_coverage=1
00:06:35.711  		--rc genhtml_function_coverage=1
00:06:35.711  		--rc genhtml_legend=1
00:06:35.711  		--rc geninfo_all_blocks=1
00:06:35.711  		--rc geninfo_unexecuted_blocks=1
00:06:35.711  		
00:06:35.711  		'
00:06:35.711   18:27:22 rpc -- rpc/rpc.sh@65 -- # spdk_pid=380838
00:06:35.711   18:27:22 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:06:35.711   18:27:22 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:06:35.711   18:27:22 rpc -- rpc/rpc.sh@67 -- # waitforlisten 380838
00:06:35.711   18:27:22 rpc -- common/autotest_common.sh@835 -- # '[' -z 380838 ']'
00:06:35.711   18:27:22 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:35.711   18:27:22 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:35.711   18:27:22 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:35.711  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:35.711   18:27:22 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:35.711   18:27:22 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:35.711  [2024-11-17 18:27:22.145312] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:06:35.711  [2024-11-17 18:27:22.145421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid380838 ]
00:06:35.711  [2024-11-17 18:27:22.252362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:35.971  [2024-11-17 18:27:22.294775] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:06:35.971  [2024-11-17 18:27:22.294826] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 380838' to capture a snapshot of events at runtime.
00:06:35.971  [2024-11-17 18:27:22.294843] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:06:35.971  [2024-11-17 18:27:22.294855] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:06:35.971  [2024-11-17 18:27:22.294872] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid380838 for offline analysis/debug.
00:06:35.971  [2024-11-17 18:27:22.295380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:36.539   18:27:23 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:36.539   18:27:23 rpc -- common/autotest_common.sh@868 -- # return 0
00:06:36.539   18:27:23 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/python:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/python:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc
00:06:36.539   18:27:23 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/python:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/python:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc
00:06:36.539   18:27:23 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:06:36.539   18:27:23 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:06:36.539   18:27:23 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:36.539   18:27:23 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:36.539   18:27:23 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:36.539  ************************************
00:06:36.539  START TEST rpc_integrity
00:06:36.539  ************************************
00:06:36.539   18:27:23 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:06:36.539    18:27:23 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:06:36.539    18:27:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:36.539    18:27:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:36.539    18:27:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:36.539   18:27:23 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:06:36.539    18:27:23 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:06:36.539   18:27:23 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:06:36.539    18:27:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:06:36.539    18:27:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:36.539    18:27:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:36.799    18:27:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:36.799   18:27:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:06:36.799    18:27:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:06:36.799    18:27:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:36.799    18:27:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:36.799    18:27:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:36.799   18:27:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:06:36.799  {
00:06:36.799  "name": "Malloc0",
00:06:36.799  "aliases": [
00:06:36.799  "3f1c707f-a1e2-4030-a079-02a5bf6c2547"
00:06:36.799  ],
00:06:36.799  "product_name": "Malloc disk",
00:06:36.799  "block_size": 512,
00:06:36.799  "num_blocks": 16384,
00:06:36.799  "uuid": "3f1c707f-a1e2-4030-a079-02a5bf6c2547",
00:06:36.799  "assigned_rate_limits": {
00:06:36.799  "rw_ios_per_sec": 0,
00:06:36.799  "rw_mbytes_per_sec": 0,
00:06:36.799  "r_mbytes_per_sec": 0,
00:06:36.799  "w_mbytes_per_sec": 0
00:06:36.799  },
00:06:36.799  "claimed": false,
00:06:36.799  "zoned": false,
00:06:36.799  "supported_io_types": {
00:06:36.799  "read": true,
00:06:36.799  "write": true,
00:06:36.799  "unmap": true,
00:06:36.799  "flush": true,
00:06:36.799  "reset": true,
00:06:36.799  "nvme_admin": false,
00:06:36.799  "nvme_io": false,
00:06:36.799  "nvme_io_md": false,
00:06:36.799  "write_zeroes": true,
00:06:36.799  "zcopy": true,
00:06:36.799  "get_zone_info": false,
00:06:36.799  "zone_management": false,
00:06:36.799  "zone_append": false,
00:06:36.799  "compare": false,
00:06:36.799  "compare_and_write": false,
00:06:36.799  "abort": true,
00:06:36.799  "seek_hole": false,
00:06:36.799  "seek_data": false,
00:06:36.799  "copy": true,
00:06:36.799  "nvme_iov_md": false
00:06:36.799  },
00:06:36.799  "memory_domains": [
00:06:36.799  {
00:06:36.799  "dma_device_id": "system",
00:06:36.799  "dma_device_type": 1
00:06:36.799  },
00:06:36.799  {
00:06:36.799  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:36.799  "dma_device_type": 2
00:06:36.799  }
00:06:36.799  ],
00:06:36.799  "driver_specific": {}
00:06:36.799  }
00:06:36.799  ]'
00:06:36.799    18:27:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:06:36.799   18:27:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:06:36.799   18:27:23 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:06:36.799   18:27:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:36.799   18:27:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:36.799  [2024-11-17 18:27:23.172188] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:06:36.799  [2024-11-17 18:27:23.172252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:36.799  [2024-11-17 18:27:23.172281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600001c580
00:06:36.799  [2024-11-17 18:27:23.172299] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:36.799  [2024-11-17 18:27:23.174432] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:36.799  [2024-11-17 18:27:23.174459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:06:36.799  Passthru0
00:06:36.799   18:27:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:36.799    18:27:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:06:36.799    18:27:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:36.799    18:27:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:36.799    18:27:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:36.799   18:27:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:06:36.799  {
00:06:36.799  "name": "Malloc0",
00:06:36.799  "aliases": [
00:06:36.799  "3f1c707f-a1e2-4030-a079-02a5bf6c2547"
00:06:36.799  ],
00:06:36.799  "product_name": "Malloc disk",
00:06:36.799  "block_size": 512,
00:06:36.799  "num_blocks": 16384,
00:06:36.799  "uuid": "3f1c707f-a1e2-4030-a079-02a5bf6c2547",
00:06:36.799  "assigned_rate_limits": {
00:06:36.799  "rw_ios_per_sec": 0,
00:06:36.799  "rw_mbytes_per_sec": 0,
00:06:36.799  "r_mbytes_per_sec": 0,
00:06:36.799  "w_mbytes_per_sec": 0
00:06:36.799  },
00:06:36.799  "claimed": true,
00:06:36.799  "claim_type": "exclusive_write",
00:06:36.799  "zoned": false,
00:06:36.799  "supported_io_types": {
00:06:36.799  "read": true,
00:06:36.799  "write": true,
00:06:36.799  "unmap": true,
00:06:36.799  "flush": true,
00:06:36.799  "reset": true,
00:06:36.799  "nvme_admin": false,
00:06:36.799  "nvme_io": false,
00:06:36.799  "nvme_io_md": false,
00:06:36.799  "write_zeroes": true,
00:06:36.799  "zcopy": true,
00:06:36.799  "get_zone_info": false,
00:06:36.799  "zone_management": false,
00:06:36.799  "zone_append": false,
00:06:36.799  "compare": false,
00:06:36.799  "compare_and_write": false,
00:06:36.799  "abort": true,
00:06:36.799  "seek_hole": false,
00:06:36.799  "seek_data": false,
00:06:36.799  "copy": true,
00:06:36.799  "nvme_iov_md": false
00:06:36.799  },
00:06:36.799  "memory_domains": [
00:06:36.799  {
00:06:36.799  "dma_device_id": "system",
00:06:36.799  "dma_device_type": 1
00:06:36.799  },
00:06:36.799  {
00:06:36.799  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:36.799  "dma_device_type": 2
00:06:36.799  }
00:06:36.799  ],
00:06:36.799  "driver_specific": {}
00:06:36.799  },
00:06:36.799  {
00:06:36.799  "name": "Passthru0",
00:06:36.799  "aliases": [
00:06:36.799  "56d1df3e-9bc4-512e-9b9c-ee9f23d6f581"
00:06:36.799  ],
00:06:36.799  "product_name": "passthru",
00:06:36.799  "block_size": 512,
00:06:36.799  "num_blocks": 16384,
00:06:36.799  "uuid": "56d1df3e-9bc4-512e-9b9c-ee9f23d6f581",
00:06:36.799  "assigned_rate_limits": {
00:06:36.799  "rw_ios_per_sec": 0,
00:06:36.799  "rw_mbytes_per_sec": 0,
00:06:36.799  "r_mbytes_per_sec": 0,
00:06:36.799  "w_mbytes_per_sec": 0
00:06:36.799  },
00:06:36.799  "claimed": false,
00:06:36.799  "zoned": false,
00:06:36.799  "supported_io_types": {
00:06:36.799  "read": true,
00:06:36.799  "write": true,
00:06:36.799  "unmap": true,
00:06:36.799  "flush": true,
00:06:36.799  "reset": true,
00:06:36.799  "nvme_admin": false,
00:06:36.799  "nvme_io": false,
00:06:36.799  "nvme_io_md": false,
00:06:36.799  "write_zeroes": true,
00:06:36.799  "zcopy": true,
00:06:36.799  "get_zone_info": false,
00:06:36.799  "zone_management": false,
00:06:36.799  "zone_append": false,
00:06:36.799  "compare": false,
00:06:36.799  "compare_and_write": false,
00:06:36.799  "abort": true,
00:06:36.799  "seek_hole": false,
00:06:36.799  "seek_data": false,
00:06:36.799  "copy": true,
00:06:36.799  "nvme_iov_md": false
00:06:36.799  },
00:06:36.799  "memory_domains": [
00:06:36.799  {
00:06:36.800  "dma_device_id": "system",
00:06:36.800  "dma_device_type": 1
00:06:36.800  },
00:06:36.800  {
00:06:36.800  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:36.800  "dma_device_type": 2
00:06:36.800  }
00:06:36.800  ],
00:06:36.800  "driver_specific": {
00:06:36.800  "passthru": {
00:06:36.800  "name": "Passthru0",
00:06:36.800  "base_bdev_name": "Malloc0"
00:06:36.800  }
00:06:36.800  }
00:06:36.800  }
00:06:36.800  ]'
00:06:36.800    18:27:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:06:36.800   18:27:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:06:36.800   18:27:23 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:06:36.800   18:27:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:36.800   18:27:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:36.800   18:27:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:36.800   18:27:23 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:06:36.800   18:27:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:36.800   18:27:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:36.800   18:27:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:36.800    18:27:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:06:36.800    18:27:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:36.800    18:27:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:36.800    18:27:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:36.800   18:27:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:06:36.800    18:27:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:06:36.800   18:27:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:06:36.800  
00:06:36.800  real	0m0.238s
00:06:36.800  user	0m0.151s
00:06:36.800  sys	0m0.028s
00:06:36.800   18:27:23 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:36.800   18:27:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:36.800  ************************************
00:06:36.800  END TEST rpc_integrity
00:06:36.800  ************************************
00:06:36.800   18:27:23 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:06:36.800   18:27:23 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:36.800   18:27:23 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:36.800   18:27:23 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:36.800  ************************************
00:06:36.800  START TEST rpc_plugins
00:06:36.800  ************************************
00:06:36.800   18:27:23 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins
00:06:36.800    18:27:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:06:36.800    18:27:23 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:36.800    18:27:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:36.800    18:27:23 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:36.800   18:27:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:06:36.800    18:27:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:06:36.800    18:27:23 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:36.800    18:27:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:36.800    18:27:23 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:36.800   18:27:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:06:36.800  {
00:06:36.800  "name": "Malloc1",
00:06:36.800  "aliases": [
00:06:36.800  "934b7ded-7004-4a8d-a810-556b3cadcbed"
00:06:36.800  ],
00:06:36.800  "product_name": "Malloc disk",
00:06:36.800  "block_size": 4096,
00:06:36.800  "num_blocks": 256,
00:06:36.800  "uuid": "934b7ded-7004-4a8d-a810-556b3cadcbed",
00:06:36.800  "assigned_rate_limits": {
00:06:36.800  "rw_ios_per_sec": 0,
00:06:36.800  "rw_mbytes_per_sec": 0,
00:06:36.800  "r_mbytes_per_sec": 0,
00:06:36.800  "w_mbytes_per_sec": 0
00:06:36.800  },
00:06:36.800  "claimed": false,
00:06:36.800  "zoned": false,
00:06:36.800  "supported_io_types": {
00:06:36.800  "read": true,
00:06:36.800  "write": true,
00:06:36.800  "unmap": true,
00:06:36.800  "flush": true,
00:06:36.800  "reset": true,
00:06:36.800  "nvme_admin": false,
00:06:36.800  "nvme_io": false,
00:06:36.800  "nvme_io_md": false,
00:06:36.800  "write_zeroes": true,
00:06:36.800  "zcopy": true,
00:06:36.800  "get_zone_info": false,
00:06:36.800  "zone_management": false,
00:06:36.800  "zone_append": false,
00:06:36.800  "compare": false,
00:06:36.800  "compare_and_write": false,
00:06:36.800  "abort": true,
00:06:36.800  "seek_hole": false,
00:06:36.800  "seek_data": false,
00:06:36.800  "copy": true,
00:06:36.800  "nvme_iov_md": false
00:06:36.800  },
00:06:36.800  "memory_domains": [
00:06:36.800  {
00:06:36.800  "dma_device_id": "system",
00:06:36.800  "dma_device_type": 1
00:06:36.800  },
00:06:36.800  {
00:06:36.800  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:36.800  "dma_device_type": 2
00:06:36.800  }
00:06:36.800  ],
00:06:36.800  "driver_specific": {}
00:06:36.800  }
00:06:36.800  ]'
00:06:36.800    18:27:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:06:37.059   18:27:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:06:37.059   18:27:23 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:06:37.059   18:27:23 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:37.059   18:27:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:37.059   18:27:23 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:37.059    18:27:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:06:37.059    18:27:23 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:37.059    18:27:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:37.059    18:27:23 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:37.059   18:27:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:06:37.059    18:27:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:06:37.059   18:27:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:06:37.059  
00:06:37.059  real	0m0.124s
00:06:37.059  user	0m0.083s
00:06:37.059  sys	0m0.013s
00:06:37.059   18:27:23 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:37.059   18:27:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:37.059  ************************************
00:06:37.059  END TEST rpc_plugins
00:06:37.059  ************************************
00:06:37.059   18:27:23 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:06:37.059   18:27:23 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:37.059   18:27:23 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:37.059   18:27:23 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:37.059  ************************************
00:06:37.059  START TEST rpc_trace_cmd_test
00:06:37.059  ************************************
00:06:37.059   18:27:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test
00:06:37.059   18:27:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:06:37.059    18:27:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:06:37.059    18:27:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:37.059    18:27:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:06:37.059    18:27:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:37.059   18:27:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:06:37.059  "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid380838",
00:06:37.059  "tpoint_group_mask": "0x8",
00:06:37.059  "iscsi_conn": {
00:06:37.059  "mask": "0x2",
00:06:37.059  "tpoint_mask": "0x0"
00:06:37.059  },
00:06:37.059  "scsi": {
00:06:37.059  "mask": "0x4",
00:06:37.059  "tpoint_mask": "0x0"
00:06:37.059  },
00:06:37.059  "bdev": {
00:06:37.059  "mask": "0x8",
00:06:37.059  "tpoint_mask": "0xffffffffffffffff"
00:06:37.059  },
00:06:37.059  "nvmf_rdma": {
00:06:37.059  "mask": "0x10",
00:06:37.059  "tpoint_mask": "0x0"
00:06:37.060  },
00:06:37.060  "nvmf_tcp": {
00:06:37.060  "mask": "0x20",
00:06:37.060  "tpoint_mask": "0x0"
00:06:37.060  },
00:06:37.060  "ftl": {
00:06:37.060  "mask": "0x40",
00:06:37.060  "tpoint_mask": "0x0"
00:06:37.060  },
00:06:37.060  "blobfs": {
00:06:37.060  "mask": "0x80",
00:06:37.060  "tpoint_mask": "0x0"
00:06:37.060  },
00:06:37.060  "dsa": {
00:06:37.060  "mask": "0x200",
00:06:37.060  "tpoint_mask": "0x0"
00:06:37.060  },
00:06:37.060  "thread": {
00:06:37.060  "mask": "0x400",
00:06:37.060  "tpoint_mask": "0x0"
00:06:37.060  },
00:06:37.060  "nvme_pcie": {
00:06:37.060  "mask": "0x800",
00:06:37.060  "tpoint_mask": "0x0"
00:06:37.060  },
00:06:37.060  "iaa": {
00:06:37.060  "mask": "0x1000",
00:06:37.060  "tpoint_mask": "0x0"
00:06:37.060  },
00:06:37.060  "nvme_tcp": {
00:06:37.060  "mask": "0x2000",
00:06:37.060  "tpoint_mask": "0x0"
00:06:37.060  },
00:06:37.060  "bdev_nvme": {
00:06:37.060  "mask": "0x4000",
00:06:37.060  "tpoint_mask": "0x0"
00:06:37.060  },
00:06:37.060  "sock": {
00:06:37.060  "mask": "0x8000",
00:06:37.060  "tpoint_mask": "0x0"
00:06:37.060  },
00:06:37.060  "blob": {
00:06:37.060  "mask": "0x10000",
00:06:37.060  "tpoint_mask": "0x0"
00:06:37.060  },
00:06:37.060  "bdev_raid": {
00:06:37.060  "mask": "0x20000",
00:06:37.060  "tpoint_mask": "0x0"
00:06:37.060  },
00:06:37.060  "scheduler": {
00:06:37.060  "mask": "0x40000",
00:06:37.060  "tpoint_mask": "0x0"
00:06:37.060  }
00:06:37.060  }'
00:06:37.060    18:27:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:06:37.060   18:27:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']'
00:06:37.060    18:27:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:06:37.060   18:27:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:06:37.060    18:27:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:06:37.060   18:27:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:06:37.060    18:27:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:06:37.320   18:27:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:06:37.320    18:27:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:06:37.320   18:27:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:06:37.320  
00:06:37.320  real	0m0.184s
00:06:37.320  user	0m0.166s
00:06:37.320  sys	0m0.008s
00:06:37.320   18:27:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:37.320   18:27:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:06:37.320  ************************************
00:06:37.320  END TEST rpc_trace_cmd_test
00:06:37.320  ************************************
00:06:37.320   18:27:23 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:06:37.320   18:27:23 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:06:37.320   18:27:23 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:06:37.320   18:27:23 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:37.320   18:27:23 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:37.320   18:27:23 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:37.320  ************************************
00:06:37.320  START TEST rpc_daemon_integrity
00:06:37.320  ************************************
00:06:37.320   18:27:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:06:37.320    18:27:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:06:37.320    18:27:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:37.320    18:27:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:37.320    18:27:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:37.320   18:27:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:06:37.320    18:27:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length
00:06:37.320   18:27:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:06:37.320    18:27:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:06:37.320    18:27:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:37.320    18:27:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:37.320    18:27:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:37.320   18:27:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2
00:06:37.320    18:27:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:06:37.320    18:27:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:37.320    18:27:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:37.320    18:27:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:37.320   18:27:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:06:37.320  {
00:06:37.320  "name": "Malloc2",
00:06:37.320  "aliases": [
00:06:37.320  "56315501-b157-4f77-80cf-4f11cbac5d3e"
00:06:37.320  ],
00:06:37.320  "product_name": "Malloc disk",
00:06:37.320  "block_size": 512,
00:06:37.320  "num_blocks": 16384,
00:06:37.320  "uuid": "56315501-b157-4f77-80cf-4f11cbac5d3e",
00:06:37.320  "assigned_rate_limits": {
00:06:37.320  "rw_ios_per_sec": 0,
00:06:37.320  "rw_mbytes_per_sec": 0,
00:06:37.320  "r_mbytes_per_sec": 0,
00:06:37.320  "w_mbytes_per_sec": 0
00:06:37.320  },
00:06:37.320  "claimed": false,
00:06:37.320  "zoned": false,
00:06:37.320  "supported_io_types": {
00:06:37.320  "read": true,
00:06:37.320  "write": true,
00:06:37.320  "unmap": true,
00:06:37.320  "flush": true,
00:06:37.320  "reset": true,
00:06:37.320  "nvme_admin": false,
00:06:37.320  "nvme_io": false,
00:06:37.320  "nvme_io_md": false,
00:06:37.320  "write_zeroes": true,
00:06:37.320  "zcopy": true,
00:06:37.321  "get_zone_info": false,
00:06:37.321  "zone_management": false,
00:06:37.321  "zone_append": false,
00:06:37.321  "compare": false,
00:06:37.321  "compare_and_write": false,
00:06:37.321  "abort": true,
00:06:37.321  "seek_hole": false,
00:06:37.321  "seek_data": false,
00:06:37.321  "copy": true,
00:06:37.321  "nvme_iov_md": false
00:06:37.321  },
00:06:37.321  "memory_domains": [
00:06:37.321  {
00:06:37.321  "dma_device_id": "system",
00:06:37.321  "dma_device_type": 1
00:06:37.321  },
00:06:37.321  {
00:06:37.321  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:37.321  "dma_device_type": 2
00:06:37.321  }
00:06:37.321  ],
00:06:37.321  "driver_specific": {}
00:06:37.321  }
00:06:37.321  ]'
00:06:37.321    18:27:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length
00:06:37.321   18:27:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:06:37.321   18:27:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:06:37.321   18:27:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:37.321   18:27:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:37.321  [2024-11-17 18:27:23.821975] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:06:37.321  [2024-11-17 18:27:23.822014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:37.321  [2024-11-17 18:27:23.822042] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600001d780
00:06:37.321  [2024-11-17 18:27:23.822056] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:37.321  [2024-11-17 18:27:23.824124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:37.321  [2024-11-17 18:27:23.824152] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:06:37.321  Passthru0
00:06:37.321   18:27:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:37.321    18:27:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:06:37.321    18:27:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:37.321    18:27:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:37.321    18:27:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:37.321   18:27:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:06:37.321  {
00:06:37.321  "name": "Malloc2",
00:06:37.321  "aliases": [
00:06:37.321  "56315501-b157-4f77-80cf-4f11cbac5d3e"
00:06:37.321  ],
00:06:37.321  "product_name": "Malloc disk",
00:06:37.321  "block_size": 512,
00:06:37.321  "num_blocks": 16384,
00:06:37.321  "uuid": "56315501-b157-4f77-80cf-4f11cbac5d3e",
00:06:37.321  "assigned_rate_limits": {
00:06:37.321  "rw_ios_per_sec": 0,
00:06:37.321  "rw_mbytes_per_sec": 0,
00:06:37.321  "r_mbytes_per_sec": 0,
00:06:37.321  "w_mbytes_per_sec": 0
00:06:37.321  },
00:06:37.321  "claimed": true,
00:06:37.321  "claim_type": "exclusive_write",
00:06:37.321  "zoned": false,
00:06:37.321  "supported_io_types": {
00:06:37.321  "read": true,
00:06:37.321  "write": true,
00:06:37.321  "unmap": true,
00:06:37.321  "flush": true,
00:06:37.321  "reset": true,
00:06:37.321  "nvme_admin": false,
00:06:37.321  "nvme_io": false,
00:06:37.321  "nvme_io_md": false,
00:06:37.321  "write_zeroes": true,
00:06:37.321  "zcopy": true,
00:06:37.321  "get_zone_info": false,
00:06:37.321  "zone_management": false,
00:06:37.321  "zone_append": false,
00:06:37.321  "compare": false,
00:06:37.321  "compare_and_write": false,
00:06:37.321  "abort": true,
00:06:37.321  "seek_hole": false,
00:06:37.321  "seek_data": false,
00:06:37.321  "copy": true,
00:06:37.321  "nvme_iov_md": false
00:06:37.321  },
00:06:37.321  "memory_domains": [
00:06:37.321  {
00:06:37.321  "dma_device_id": "system",
00:06:37.321  "dma_device_type": 1
00:06:37.321  },
00:06:37.321  {
00:06:37.321  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:37.321  "dma_device_type": 2
00:06:37.321  }
00:06:37.321  ],
00:06:37.321  "driver_specific": {}
00:06:37.321  },
00:06:37.321  {
00:06:37.321  "name": "Passthru0",
00:06:37.321  "aliases": [
00:06:37.321  "b630d87a-efbf-51a3-a91b-9a8dd2189440"
00:06:37.321  ],
00:06:37.321  "product_name": "passthru",
00:06:37.321  "block_size": 512,
00:06:37.321  "num_blocks": 16384,
00:06:37.321  "uuid": "b630d87a-efbf-51a3-a91b-9a8dd2189440",
00:06:37.321  "assigned_rate_limits": {
00:06:37.321  "rw_ios_per_sec": 0,
00:06:37.321  "rw_mbytes_per_sec": 0,
00:06:37.321  "r_mbytes_per_sec": 0,
00:06:37.321  "w_mbytes_per_sec": 0
00:06:37.321  },
00:06:37.321  "claimed": false,
00:06:37.321  "zoned": false,
00:06:37.321  "supported_io_types": {
00:06:37.321  "read": true,
00:06:37.321  "write": true,
00:06:37.321  "unmap": true,
00:06:37.321  "flush": true,
00:06:37.321  "reset": true,
00:06:37.321  "nvme_admin": false,
00:06:37.321  "nvme_io": false,
00:06:37.321  "nvme_io_md": false,
00:06:37.321  "write_zeroes": true,
00:06:37.321  "zcopy": true,
00:06:37.321  "get_zone_info": false,
00:06:37.321  "zone_management": false,
00:06:37.321  "zone_append": false,
00:06:37.321  "compare": false,
00:06:37.321  "compare_and_write": false,
00:06:37.321  "abort": true,
00:06:37.321  "seek_hole": false,
00:06:37.321  "seek_data": false,
00:06:37.321  "copy": true,
00:06:37.321  "nvme_iov_md": false
00:06:37.321  },
00:06:37.321  "memory_domains": [
00:06:37.321  {
00:06:37.321  "dma_device_id": "system",
00:06:37.321  "dma_device_type": 1
00:06:37.321  },
00:06:37.321  {
00:06:37.321  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:37.321  "dma_device_type": 2
00:06:37.321  }
00:06:37.321  ],
00:06:37.321  "driver_specific": {
00:06:37.321  "passthru": {
00:06:37.321  "name": "Passthru0",
00:06:37.321  "base_bdev_name": "Malloc2"
00:06:37.321  }
00:06:37.321  }
00:06:37.321  }
00:06:37.321  ]'
00:06:37.321    18:27:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length
00:06:37.321   18:27:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:06:37.321   18:27:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:06:37.321   18:27:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:37.321   18:27:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:37.321   18:27:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:37.321   18:27:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2
00:06:37.321   18:27:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:37.321   18:27:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:37.321   18:27:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:37.321    18:27:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:06:37.321    18:27:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:37.321    18:27:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:37.581    18:27:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:37.581   18:27:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:06:37.581    18:27:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length
00:06:37.581   18:27:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:06:37.581  
00:06:37.581  real	0m0.234s
00:06:37.581  user	0m0.151s
00:06:37.581  sys	0m0.028s
00:06:37.581   18:27:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:37.581   18:27:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:37.581  ************************************
00:06:37.581  END TEST rpc_daemon_integrity
00:06:37.581  ************************************
00:06:37.581   18:27:23 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:06:37.581   18:27:23 rpc -- rpc/rpc.sh@84 -- # killprocess 380838
00:06:37.581   18:27:23 rpc -- common/autotest_common.sh@954 -- # '[' -z 380838 ']'
00:06:37.581   18:27:23 rpc -- common/autotest_common.sh@958 -- # kill -0 380838
00:06:37.581    18:27:23 rpc -- common/autotest_common.sh@959 -- # uname
00:06:37.581   18:27:23 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:37.581    18:27:23 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 380838
00:06:37.581   18:27:23 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:37.581   18:27:23 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:37.581   18:27:23 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 380838'
00:06:37.581  killing process with pid 380838
00:06:37.581   18:27:23 rpc -- common/autotest_common.sh@973 -- # kill 380838
00:06:37.581   18:27:23 rpc -- common/autotest_common.sh@978 -- # wait 380838
00:06:37.840  
00:06:37.840  real	0m2.459s
00:06:37.840  user	0m3.052s
00:06:37.840  sys	0m0.690s
00:06:37.840   18:27:24 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:37.840   18:27:24 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:37.840  ************************************
00:06:37.840  END TEST rpc
00:06:37.840  ************************************
00:06:37.840   18:27:24  -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/skip_rpc.sh
00:06:38.099   18:27:24  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:38.100   18:27:24  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:38.100   18:27:24  -- common/autotest_common.sh@10 -- # set +x
00:06:38.100  ************************************
00:06:38.100  START TEST skip_rpc
00:06:38.100  ************************************
00:06:38.100   18:27:24 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/skip_rpc.sh
00:06:38.100  * Looking for test storage...
00:06:38.100  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc
00:06:38.100    18:27:24 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:38.100     18:27:24 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:06:38.100     18:27:24 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:38.100    18:27:24 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:38.100    18:27:24 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:38.100    18:27:24 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:38.100    18:27:24 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:38.100    18:27:24 skip_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:06:38.100    18:27:24 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:06:38.100    18:27:24 skip_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:06:38.100    18:27:24 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:06:38.100    18:27:24 skip_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:06:38.100    18:27:24 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:06:38.100    18:27:24 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:06:38.100    18:27:24 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:38.100    18:27:24 skip_rpc -- scripts/common.sh@344 -- # case "$op" in
00:06:38.100    18:27:24 skip_rpc -- scripts/common.sh@345 -- # : 1
00:06:38.100    18:27:24 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:38.100    18:27:24 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:38.100     18:27:24 skip_rpc -- scripts/common.sh@365 -- # decimal 1
00:06:38.100     18:27:24 skip_rpc -- scripts/common.sh@353 -- # local d=1
00:06:38.100     18:27:24 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:38.100     18:27:24 skip_rpc -- scripts/common.sh@355 -- # echo 1
00:06:38.100    18:27:24 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:06:38.100     18:27:24 skip_rpc -- scripts/common.sh@366 -- # decimal 2
00:06:38.100     18:27:24 skip_rpc -- scripts/common.sh@353 -- # local d=2
00:06:38.100     18:27:24 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:38.100     18:27:24 skip_rpc -- scripts/common.sh@355 -- # echo 2
00:06:38.100    18:27:24 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:06:38.100    18:27:24 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:38.100    18:27:24 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:38.100    18:27:24 skip_rpc -- scripts/common.sh@368 -- # return 0
00:06:38.100    18:27:24 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:38.100    18:27:24 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:38.100  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:38.100  		--rc genhtml_branch_coverage=1
00:06:38.100  		--rc genhtml_function_coverage=1
00:06:38.100  		--rc genhtml_legend=1
00:06:38.100  		--rc geninfo_all_blocks=1
00:06:38.100  		--rc geninfo_unexecuted_blocks=1
00:06:38.100  		
00:06:38.100  		'
00:06:38.100    18:27:24 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:38.100  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:38.100  		--rc genhtml_branch_coverage=1
00:06:38.100  		--rc genhtml_function_coverage=1
00:06:38.100  		--rc genhtml_legend=1
00:06:38.100  		--rc geninfo_all_blocks=1
00:06:38.100  		--rc geninfo_unexecuted_blocks=1
00:06:38.100  		
00:06:38.100  		'
00:06:38.100    18:27:24 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:06:38.100  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:38.100  		--rc genhtml_branch_coverage=1
00:06:38.100  		--rc genhtml_function_coverage=1
00:06:38.100  		--rc genhtml_legend=1
00:06:38.100  		--rc geninfo_all_blocks=1
00:06:38.100  		--rc geninfo_unexecuted_blocks=1
00:06:38.100  		
00:06:38.100  		'
00:06:38.100    18:27:24 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:06:38.100  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:38.100  		--rc genhtml_branch_coverage=1
00:06:38.100  		--rc genhtml_function_coverage=1
00:06:38.100  		--rc genhtml_legend=1
00:06:38.100  		--rc geninfo_all_blocks=1
00:06:38.100  		--rc geninfo_unexecuted_blocks=1
00:06:38.100  		
00:06:38.100  		'
00:06:38.100   18:27:24 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/config.json
00:06:38.100   18:27:24 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/log.txt
00:06:38.100   18:27:24 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:06:38.100   18:27:24 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:38.100   18:27:24 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:38.100   18:27:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:38.100  ************************************
00:06:38.100  START TEST skip_rpc
00:06:38.100  ************************************
00:06:38.100   18:27:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc
00:06:38.100   18:27:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=381432
00:06:38.100   18:27:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1
00:06:38.100   18:27:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:06:38.100   18:27:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5
00:06:38.100  [2024-11-17 18:27:24.655024] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:06:38.100  [2024-11-17 18:27:24.655162] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid381432 ]
00:06:38.360  [2024-11-17 18:27:24.756279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:38.360  [2024-11-17 18:27:24.788742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:43.638   18:27:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version
00:06:43.638   18:27:29 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0
00:06:43.638   18:27:29 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version
00:06:43.638   18:27:29 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:06:43.638   18:27:29 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:43.638    18:27:29 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:06:43.638   18:27:29 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:43.638   18:27:29 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version
00:06:43.638   18:27:29 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:43.638   18:27:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:43.638   18:27:29 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:06:43.638   18:27:29 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1
00:06:43.638   18:27:29 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:43.638   18:27:29 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:43.638   18:27:29 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:43.638   18:27:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT
00:06:43.638   18:27:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 381432
00:06:43.638   18:27:29 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 381432 ']'
00:06:43.638   18:27:29 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 381432
00:06:43.638    18:27:29 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname
00:06:43.638   18:27:29 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:43.638    18:27:29 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 381432
00:06:43.638   18:27:29 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:43.638   18:27:29 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:43.638   18:27:29 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 381432'
00:06:43.638  killing process with pid 381432
00:06:43.638   18:27:29 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 381432
00:06:43.638   18:27:29 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 381432
00:06:43.638  
00:06:43.638  real	0m5.461s
00:06:43.638  user	0m5.093s
00:06:43.638  sys	0m0.388s
00:06:43.638   18:27:30 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:43.638   18:27:30 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:43.638  ************************************
00:06:43.638  END TEST skip_rpc
00:06:43.638  ************************************
00:06:43.638   18:27:30 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:06:43.638   18:27:30 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:43.638   18:27:30 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:43.638   18:27:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:43.638  ************************************
00:06:43.638  START TEST skip_rpc_with_json
00:06:43.638  ************************************
00:06:43.638   18:27:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json
00:06:43.638   18:27:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config
00:06:43.638   18:27:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=382487
00:06:43.638   18:27:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:43.638   18:27:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:06:43.638   18:27:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 382487
00:06:43.638   18:27:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 382487 ']'
00:06:43.638   18:27:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:43.638   18:27:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:43.638   18:27:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:43.638  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:43.638   18:27:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:43.638   18:27:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:06:43.638  [2024-11-17 18:27:30.175573] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:06:43.638  [2024-11-17 18:27:30.175683] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid382487 ]
00:06:43.898  [2024-11-17 18:27:30.288097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:43.898  [2024-11-17 18:27:30.325284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:44.468   18:27:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:44.468   18:27:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0
00:06:44.468   18:27:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:06:44.468   18:27:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:44.468   18:27:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:06:44.468  [2024-11-17 18:27:31.031018] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:06:44.468  request:
00:06:44.468  {
00:06:44.468  "trtype": "tcp",
00:06:44.468  "method": "nvmf_get_transports",
00:06:44.468  "req_id": 1
00:06:44.468  }
00:06:44.468  Got JSON-RPC error response
00:06:44.468  response:
00:06:44.468  {
00:06:44.468  "code": -19,
00:06:44.468  "message": "No such device"
00:06:44.468  }
00:06:44.468   18:27:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:06:44.468   18:27:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:06:44.468   18:27:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:44.468   18:27:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:06:44.468  [2024-11-17 18:27:31.039179] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:44.468   18:27:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:44.468   18:27:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:06:44.728   18:27:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:44.728   18:27:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:06:44.728   18:27:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:44.728   18:27:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/config.json
00:06:44.728  {
00:06:44.728  "subsystems": [
00:06:44.728  {
00:06:44.728  "subsystem": "fsdev",
00:06:44.728  "config": [
00:06:44.728  {
00:06:44.728  "method": "fsdev_set_opts",
00:06:44.728  "params": {
00:06:44.728  "fsdev_io_pool_size": 65535,
00:06:44.728  "fsdev_io_cache_size": 256
00:06:44.728  }
00:06:44.728  }
00:06:44.728  ]
00:06:44.728  },
00:06:44.728  {
00:06:44.728  "subsystem": "vfio_user_target",
00:06:44.728  "config": null
00:06:44.728  },
00:06:44.728  {
00:06:44.728  "subsystem": "keyring",
00:06:44.728  "config": []
00:06:44.728  },
00:06:44.728  {
00:06:44.728  "subsystem": "iobuf",
00:06:44.728  "config": [
00:06:44.728  {
00:06:44.728  "method": "iobuf_set_options",
00:06:44.728  "params": {
00:06:44.728  "small_pool_count": 8192,
00:06:44.728  "large_pool_count": 1024,
00:06:44.728  "small_bufsize": 8192,
00:06:44.728  "large_bufsize": 135168,
00:06:44.728  "enable_numa": false
00:06:44.728  }
00:06:44.728  }
00:06:44.728  ]
00:06:44.728  },
00:06:44.728  {
00:06:44.728  "subsystem": "sock",
00:06:44.728  "config": [
00:06:44.728  {
00:06:44.728  "method": "sock_set_default_impl",
00:06:44.728  "params": {
00:06:44.728  "impl_name": "posix"
00:06:44.728  }
00:06:44.728  },
00:06:44.728  {
00:06:44.728  "method": "sock_impl_set_options",
00:06:44.728  "params": {
00:06:44.728  "impl_name": "ssl",
00:06:44.728  "recv_buf_size": 4096,
00:06:44.728  "send_buf_size": 4096,
00:06:44.728  "enable_recv_pipe": true,
00:06:44.728  "enable_quickack": false,
00:06:44.728  "enable_placement_id": 0,
00:06:44.728  "enable_zerocopy_send_server": true,
00:06:44.728  "enable_zerocopy_send_client": false,
00:06:44.728  "zerocopy_threshold": 0,
00:06:44.728  "tls_version": 0,
00:06:44.728  "enable_ktls": false
00:06:44.728  }
00:06:44.728  },
00:06:44.728  {
00:06:44.728  "method": "sock_impl_set_options",
00:06:44.728  "params": {
00:06:44.728  "impl_name": "posix",
00:06:44.728  "recv_buf_size": 2097152,
00:06:44.728  "send_buf_size": 2097152,
00:06:44.728  "enable_recv_pipe": true,
00:06:44.728  "enable_quickack": false,
00:06:44.728  "enable_placement_id": 0,
00:06:44.728  "enable_zerocopy_send_server": true,
00:06:44.728  "enable_zerocopy_send_client": false,
00:06:44.728  "zerocopy_threshold": 0,
00:06:44.728  "tls_version": 0,
00:06:44.728  "enable_ktls": false
00:06:44.728  }
00:06:44.728  }
00:06:44.728  ]
00:06:44.728  },
00:06:44.728  {
00:06:44.728  "subsystem": "vmd",
00:06:44.728  "config": []
00:06:44.728  },
00:06:44.728  {
00:06:44.728  "subsystem": "accel",
00:06:44.728  "config": [
00:06:44.728  {
00:06:44.728  "method": "accel_set_options",
00:06:44.728  "params": {
00:06:44.728  "small_cache_size": 128,
00:06:44.728  "large_cache_size": 16,
00:06:44.728  "task_count": 2048,
00:06:44.728  "sequence_count": 2048,
00:06:44.728  "buf_count": 2048
00:06:44.728  }
00:06:44.728  }
00:06:44.728  ]
00:06:44.728  },
00:06:44.728  {
00:06:44.728  "subsystem": "bdev",
00:06:44.728  "config": [
00:06:44.728  {
00:06:44.728  "method": "bdev_set_options",
00:06:44.728  "params": {
00:06:44.728  "bdev_io_pool_size": 65535,
00:06:44.728  "bdev_io_cache_size": 256,
00:06:44.728  "bdev_auto_examine": true,
00:06:44.728  "iobuf_small_cache_size": 128,
00:06:44.728  "iobuf_large_cache_size": 16
00:06:44.728  }
00:06:44.728  },
00:06:44.728  {
00:06:44.728  "method": "bdev_raid_set_options",
00:06:44.728  "params": {
00:06:44.728  "process_window_size_kb": 1024,
00:06:44.728  "process_max_bandwidth_mb_sec": 0
00:06:44.728  }
00:06:44.728  },
00:06:44.728  {
00:06:44.728  "method": "bdev_iscsi_set_options",
00:06:44.728  "params": {
00:06:44.728  "timeout_sec": 30
00:06:44.728  }
00:06:44.728  },
00:06:44.728  {
00:06:44.728  "method": "bdev_nvme_set_options",
00:06:44.728  "params": {
00:06:44.728  "action_on_timeout": "none",
00:06:44.728  "timeout_us": 0,
00:06:44.728  "timeout_admin_us": 0,
00:06:44.728  "keep_alive_timeout_ms": 10000,
00:06:44.729  "arbitration_burst": 0,
00:06:44.729  "low_priority_weight": 0,
00:06:44.729  "medium_priority_weight": 0,
00:06:44.729  "high_priority_weight": 0,
00:06:44.729  "nvme_adminq_poll_period_us": 10000,
00:06:44.729  "nvme_ioq_poll_period_us": 0,
00:06:44.729  "io_queue_requests": 0,
00:06:44.729  "delay_cmd_submit": true,
00:06:44.729  "transport_retry_count": 4,
00:06:44.729  "bdev_retry_count": 3,
00:06:44.729  "transport_ack_timeout": 0,
00:06:44.729  "ctrlr_loss_timeout_sec": 0,
00:06:44.729  "reconnect_delay_sec": 0,
00:06:44.729  "fast_io_fail_timeout_sec": 0,
00:06:44.729  "disable_auto_failback": false,
00:06:44.729  "generate_uuids": false,
00:06:44.729  "transport_tos": 0,
00:06:44.729  "nvme_error_stat": false,
00:06:44.729  "rdma_srq_size": 0,
00:06:44.729  "io_path_stat": false,
00:06:44.729  "allow_accel_sequence": false,
00:06:44.729  "rdma_max_cq_size": 0,
00:06:44.729  "rdma_cm_event_timeout_ms": 0,
00:06:44.729  "dhchap_digests": [
00:06:44.729  "sha256",
00:06:44.729  "sha384",
00:06:44.729  "sha512"
00:06:44.729  ],
00:06:44.729  "dhchap_dhgroups": [
00:06:44.729  "null",
00:06:44.729  "ffdhe2048",
00:06:44.729  "ffdhe3072",
00:06:44.729  "ffdhe4096",
00:06:44.729  "ffdhe6144",
00:06:44.729  "ffdhe8192"
00:06:44.729  ]
00:06:44.729  }
00:06:44.729  },
00:06:44.729  {
00:06:44.729  "method": "bdev_nvme_set_hotplug",
00:06:44.729  "params": {
00:06:44.729  "period_us": 100000,
00:06:44.729  "enable": false
00:06:44.729  }
00:06:44.729  },
00:06:44.729  {
00:06:44.729  "method": "bdev_wait_for_examine"
00:06:44.729  }
00:06:44.729  ]
00:06:44.729  },
00:06:44.729  {
00:06:44.729  "subsystem": "scsi",
00:06:44.729  "config": null
00:06:44.729  },
00:06:44.729  {
00:06:44.729  "subsystem": "scheduler",
00:06:44.729  "config": [
00:06:44.729  {
00:06:44.729  "method": "framework_set_scheduler",
00:06:44.729  "params": {
00:06:44.729  "name": "static"
00:06:44.729  }
00:06:44.729  }
00:06:44.729  ]
00:06:44.729  },
00:06:44.729  {
00:06:44.729  "subsystem": "vhost_scsi",
00:06:44.729  "config": []
00:06:44.729  },
00:06:44.729  {
00:06:44.729  "subsystem": "vhost_blk",
00:06:44.729  "config": []
00:06:44.729  },
00:06:44.729  {
00:06:44.729  "subsystem": "ublk",
00:06:44.729  "config": []
00:06:44.729  },
00:06:44.729  {
00:06:44.729  "subsystem": "nbd",
00:06:44.729  "config": []
00:06:44.729  },
00:06:44.729  {
00:06:44.729  "subsystem": "nvmf",
00:06:44.729  "config": [
00:06:44.729  {
00:06:44.729  "method": "nvmf_set_config",
00:06:44.729  "params": {
00:06:44.729  "discovery_filter": "match_any",
00:06:44.729  "admin_cmd_passthru": {
00:06:44.729  "identify_ctrlr": false
00:06:44.729  },
00:06:44.729  "dhchap_digests": [
00:06:44.729  "sha256",
00:06:44.729  "sha384",
00:06:44.729  "sha512"
00:06:44.729  ],
00:06:44.729  "dhchap_dhgroups": [
00:06:44.729  "null",
00:06:44.729  "ffdhe2048",
00:06:44.729  "ffdhe3072",
00:06:44.729  "ffdhe4096",
00:06:44.729  "ffdhe6144",
00:06:44.729  "ffdhe8192"
00:06:44.729  ]
00:06:44.729  }
00:06:44.729  },
00:06:44.729  {
00:06:44.729  "method": "nvmf_set_max_subsystems",
00:06:44.729  "params": {
00:06:44.729  "max_subsystems": 1024
00:06:44.729  }
00:06:44.729  },
00:06:44.729  {
00:06:44.729  "method": "nvmf_set_crdt",
00:06:44.729  "params": {
00:06:44.729  "crdt1": 0,
00:06:44.729  "crdt2": 0,
00:06:44.729  "crdt3": 0
00:06:44.729  }
00:06:44.729  },
00:06:44.729  {
00:06:44.729  "method": "nvmf_create_transport",
00:06:44.729  "params": {
00:06:44.729  "trtype": "TCP",
00:06:44.729  "max_queue_depth": 128,
00:06:44.729  "max_io_qpairs_per_ctrlr": 127,
00:06:44.729  "in_capsule_data_size": 4096,
00:06:44.729  "max_io_size": 131072,
00:06:44.729  "io_unit_size": 131072,
00:06:44.729  "max_aq_depth": 128,
00:06:44.729  "num_shared_buffers": 511,
00:06:44.729  "buf_cache_size": 4294967295,
00:06:44.729  "dif_insert_or_strip": false,
00:06:44.729  "zcopy": false,
00:06:44.729  "c2h_success": true,
00:06:44.729  "sock_priority": 0,
00:06:44.729  "abort_timeout_sec": 1,
00:06:44.729  "ack_timeout": 0,
00:06:44.729  "data_wr_pool_size": 0
00:06:44.729  }
00:06:44.729  }
00:06:44.729  ]
00:06:44.729  },
00:06:44.729  {
00:06:44.729  "subsystem": "iscsi",
00:06:44.729  "config": [
00:06:44.729  {
00:06:44.729  "method": "iscsi_set_options",
00:06:44.729  "params": {
00:06:44.729  "node_base": "iqn.2016-06.io.spdk",
00:06:44.729  "max_sessions": 128,
00:06:44.729  "max_connections_per_session": 2,
00:06:44.729  "max_queue_depth": 64,
00:06:44.729  "default_time2wait": 2,
00:06:44.729  "default_time2retain": 20,
00:06:44.729  "first_burst_length": 8192,
00:06:44.729  "immediate_data": true,
00:06:44.729  "allow_duplicated_isid": false,
00:06:44.729  "error_recovery_level": 0,
00:06:44.729  "nop_timeout": 60,
00:06:44.729  "nop_in_interval": 30,
00:06:44.729  "disable_chap": false,
00:06:44.729  "require_chap": false,
00:06:44.729  "mutual_chap": false,
00:06:44.729  "chap_group": 0,
00:06:44.729  "max_large_datain_per_connection": 64,
00:06:44.729  "max_r2t_per_connection": 4,
00:06:44.729  "pdu_pool_size": 36864,
00:06:44.729  "immediate_data_pool_size": 16384,
00:06:44.729  "data_out_pool_size": 2048
00:06:44.729  }
00:06:44.729  }
00:06:44.729  ]
00:06:44.729  }
00:06:44.729  ]
00:06:44.729  }
00:06:44.729   18:27:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:06:44.729   18:27:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 382487
00:06:44.729   18:27:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 382487 ']'
00:06:44.729   18:27:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 382487
00:06:44.729    18:27:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:06:44.729   18:27:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:44.729    18:27:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 382487
00:06:44.729   18:27:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:44.729   18:27:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:44.729   18:27:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 382487'
00:06:44.729  killing process with pid 382487
00:06:44.729   18:27:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 382487
00:06:44.729   18:27:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 382487
00:06:45.299   18:27:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/config.json
00:06:45.299   18:27:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=382710
00:06:45.299   18:27:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:06:50.577   18:27:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 382710
00:06:50.577   18:27:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 382710 ']'
00:06:50.577   18:27:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 382710
00:06:50.577    18:27:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:06:50.577   18:27:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:50.577    18:27:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 382710
00:06:50.577   18:27:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:50.577   18:27:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:50.577   18:27:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 382710'
00:06:50.577  killing process with pid 382710
00:06:50.577   18:27:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 382710
00:06:50.577   18:27:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 382710
00:06:50.837   18:27:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/log.txt
00:06:50.837   18:27:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/log.txt
00:06:50.837  
00:06:50.837  real	0m7.086s
00:06:50.837  user	0m6.646s
00:06:50.837  sys	0m0.952s
00:06:50.837   18:27:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:50.837   18:27:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:06:50.837  ************************************
00:06:50.837  END TEST skip_rpc_with_json
00:06:50.837  ************************************
00:06:50.837   18:27:37 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:06:50.837   18:27:37 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:50.837   18:27:37 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:50.837   18:27:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:50.837  ************************************
00:06:50.837  START TEST skip_rpc_with_delay
00:06:50.837  ************************************
00:06:50.837   18:27:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay
00:06:50.837   18:27:37 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:06:50.837   18:27:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0
00:06:50.837   18:27:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:06:50.837   18:27:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:06:50.837   18:27:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:50.837    18:27:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:06:50.837   18:27:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:50.837    18:27:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:06:50.837   18:27:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:50.837   18:27:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:06:50.837   18:27:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:06:50.837   18:27:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:06:50.837  [2024-11-17 18:27:37.301367] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:06:50.837   18:27:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1
00:06:50.837   18:27:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:50.837   18:27:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:50.837   18:27:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:50.837  
00:06:50.837  real	0m0.133s
00:06:50.837  user	0m0.081s
00:06:50.837  sys	0m0.051s
00:06:50.837   18:27:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:50.837   18:27:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:06:50.837  ************************************
00:06:50.837  END TEST skip_rpc_with_delay
00:06:50.837  ************************************
00:06:50.837    18:27:37 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:06:50.837   18:27:37 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:06:50.837   18:27:37 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:06:50.837   18:27:37 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:50.837   18:27:37 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:50.837   18:27:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:50.837  ************************************
00:06:50.837  START TEST exit_on_failed_rpc_init
00:06:50.837  ************************************
00:06:50.837   18:27:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init
00:06:50.837   18:27:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:50.837   18:27:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=383799
00:06:50.837   18:27:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 383799
00:06:50.837   18:27:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 383799 ']'
00:06:50.837   18:27:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:50.838   18:27:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:50.838   18:27:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:50.838  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:50.838   18:27:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:50.838   18:27:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:06:51.097  [2024-11-17 18:27:37.474116] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:06:51.097  [2024-11-17 18:27:37.474251] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid383799 ]
00:06:51.097  [2024-11-17 18:27:37.575880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:51.097  [2024-11-17 18:27:37.606435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:52.036   18:27:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:52.036   18:27:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0
00:06:52.036   18:27:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:06:52.036   18:27:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:06:52.036   18:27:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0
00:06:52.036   18:27:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:06:52.036   18:27:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:06:52.036   18:27:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:52.036    18:27:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:06:52.036   18:27:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:52.036    18:27:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:06:52.036   18:27:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:52.036   18:27:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:06:52.036   18:27:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:06:52.036   18:27:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:06:52.036  [2024-11-17 18:27:38.409415] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:06:52.036  [2024-11-17 18:27:38.409545] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid384006 ]
00:06:52.036  [2024-11-17 18:27:38.532558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:52.036  [2024-11-17 18:27:38.571961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:52.036  [2024-11-17 18:27:38.572089] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:06:52.036  [2024-11-17 18:27:38.572120] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:06:52.036  [2024-11-17 18:27:38.572139] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:52.296   18:27:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234
00:06:52.296   18:27:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:52.296   18:27:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106
00:06:52.296   18:27:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in
00:06:52.296   18:27:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1
00:06:52.296   18:27:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:52.296   18:27:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:06:52.296   18:27:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 383799
00:06:52.296   18:27:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 383799 ']'
00:06:52.296   18:27:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 383799
00:06:52.296    18:27:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname
00:06:52.296   18:27:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:52.296    18:27:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 383799
00:06:52.296   18:27:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:52.296   18:27:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:52.296   18:27:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 383799'
00:06:52.296  killing process with pid 383799
00:06:52.296   18:27:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 383799
00:06:52.296   18:27:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 383799
00:06:52.864  
00:06:52.864  real	0m1.785s
00:06:52.864  user	0m1.976s
00:06:52.864  sys	0m0.608s
00:06:52.864   18:27:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:52.864   18:27:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:06:52.864  ************************************
00:06:52.864  END TEST exit_on_failed_rpc_init
00:06:52.864  ************************************
00:06:52.864   18:27:39 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/config.json
00:06:52.864  
00:06:52.864  real	0m14.756s
00:06:52.864  user	0m13.935s
00:06:52.864  sys	0m2.170s
00:06:52.864   18:27:39 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:52.864   18:27:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:52.864  ************************************
00:06:52.864  END TEST skip_rpc
00:06:52.864  ************************************
00:06:52.864   18:27:39  -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:06:52.864   18:27:39  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:52.864   18:27:39  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:52.864   18:27:39  -- common/autotest_common.sh@10 -- # set +x
00:06:52.864  ************************************
00:06:52.864  START TEST rpc_client
00:06:52.864  ************************************
00:06:52.864   18:27:39 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:06:52.864  * Looking for test storage...
00:06:52.865  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_client
00:06:52.865    18:27:39 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:52.865     18:27:39 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version
00:06:52.865     18:27:39 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:52.865    18:27:39 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:52.865    18:27:39 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:52.865    18:27:39 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:52.865    18:27:39 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:52.865    18:27:39 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:06:52.865    18:27:39 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:06:52.865    18:27:39 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:06:52.865    18:27:39 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:06:52.865    18:27:39 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:06:52.865    18:27:39 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:06:52.865    18:27:39 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:06:52.865    18:27:39 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:52.865    18:27:39 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:06:52.865    18:27:39 rpc_client -- scripts/common.sh@345 -- # : 1
00:06:52.865    18:27:39 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:52.865    18:27:39 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:52.865     18:27:39 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:06:52.865     18:27:39 rpc_client -- scripts/common.sh@353 -- # local d=1
00:06:52.865     18:27:39 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:52.865     18:27:39 rpc_client -- scripts/common.sh@355 -- # echo 1
00:06:52.865    18:27:39 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:06:52.865     18:27:39 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:06:52.865     18:27:39 rpc_client -- scripts/common.sh@353 -- # local d=2
00:06:52.865     18:27:39 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:52.865     18:27:39 rpc_client -- scripts/common.sh@355 -- # echo 2
00:06:52.865    18:27:39 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:06:52.865    18:27:39 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:52.865    18:27:39 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:52.865    18:27:39 rpc_client -- scripts/common.sh@368 -- # return 0
00:06:52.865    18:27:39 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:52.865    18:27:39 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:52.865  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:52.865  		--rc genhtml_branch_coverage=1
00:06:52.865  		--rc genhtml_function_coverage=1
00:06:52.865  		--rc genhtml_legend=1
00:06:52.865  		--rc geninfo_all_blocks=1
00:06:52.865  		--rc geninfo_unexecuted_blocks=1
00:06:52.865  		
00:06:52.865  		'
00:06:52.865    18:27:39 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:52.865  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:52.865  		--rc genhtml_branch_coverage=1
00:06:52.865  		--rc genhtml_function_coverage=1
00:06:52.865  		--rc genhtml_legend=1
00:06:52.865  		--rc geninfo_all_blocks=1
00:06:52.865  		--rc geninfo_unexecuted_blocks=1
00:06:52.865  		
00:06:52.865  		'
00:06:52.865    18:27:39 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:06:52.865  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:52.865  		--rc genhtml_branch_coverage=1
00:06:52.865  		--rc genhtml_function_coverage=1
00:06:52.865  		--rc genhtml_legend=1
00:06:52.865  		--rc geninfo_all_blocks=1
00:06:52.865  		--rc geninfo_unexecuted_blocks=1
00:06:52.865  		
00:06:52.865  		'
00:06:52.865    18:27:39 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:06:52.865  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:52.865  		--rc genhtml_branch_coverage=1
00:06:52.865  		--rc genhtml_function_coverage=1
00:06:52.865  		--rc genhtml_legend=1
00:06:52.865  		--rc geninfo_all_blocks=1
00:06:52.865  		--rc geninfo_unexecuted_blocks=1
00:06:52.865  		
00:06:52.865  		'
00:06:52.865   18:27:39 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_client/rpc_client_test
00:06:52.865  OK
00:06:52.865   18:27:39 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:06:52.865  
00:06:52.865  real	0m0.167s
00:06:52.865  user	0m0.096s
00:06:52.865  sys	0m0.079s
00:06:52.865   18:27:39 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:52.865   18:27:39 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:06:52.865  ************************************
00:06:52.865  END TEST rpc_client
00:06:52.865  ************************************
00:06:52.865   18:27:39  -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/json_config.sh
00:06:52.865   18:27:39  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:52.865   18:27:39  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:52.865   18:27:39  -- common/autotest_common.sh@10 -- # set +x
00:06:53.124  ************************************
00:06:53.124  START TEST json_config
00:06:53.124  ************************************
00:06:53.124   18:27:39 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/json_config.sh
00:06:53.124    18:27:39 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:53.124     18:27:39 json_config -- common/autotest_common.sh@1693 -- # lcov --version
00:06:53.124     18:27:39 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:53.124    18:27:39 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:53.124    18:27:39 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:53.124    18:27:39 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:53.124    18:27:39 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:53.124    18:27:39 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:06:53.124    18:27:39 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:06:53.124    18:27:39 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:06:53.124    18:27:39 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:06:53.124    18:27:39 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:06:53.124    18:27:39 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:06:53.124    18:27:39 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:06:53.124    18:27:39 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:53.124    18:27:39 json_config -- scripts/common.sh@344 -- # case "$op" in
00:06:53.124    18:27:39 json_config -- scripts/common.sh@345 -- # : 1
00:06:53.124    18:27:39 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:53.124    18:27:39 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:53.124     18:27:39 json_config -- scripts/common.sh@365 -- # decimal 1
00:06:53.124     18:27:39 json_config -- scripts/common.sh@353 -- # local d=1
00:06:53.124     18:27:39 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:53.124     18:27:39 json_config -- scripts/common.sh@355 -- # echo 1
00:06:53.124    18:27:39 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:06:53.124     18:27:39 json_config -- scripts/common.sh@366 -- # decimal 2
00:06:53.125     18:27:39 json_config -- scripts/common.sh@353 -- # local d=2
00:06:53.125     18:27:39 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:53.125     18:27:39 json_config -- scripts/common.sh@355 -- # echo 2
00:06:53.125    18:27:39 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:06:53.125    18:27:39 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:53.125    18:27:39 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:53.125    18:27:39 json_config -- scripts/common.sh@368 -- # return 0
00:06:53.125    18:27:39 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:53.125    18:27:39 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:53.125  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:53.125  		--rc genhtml_branch_coverage=1
00:06:53.125  		--rc genhtml_function_coverage=1
00:06:53.125  		--rc genhtml_legend=1
00:06:53.125  		--rc geninfo_all_blocks=1
00:06:53.125  		--rc geninfo_unexecuted_blocks=1
00:06:53.125  		
00:06:53.125  		'
00:06:53.125    18:27:39 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:53.125  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:53.125  		--rc genhtml_branch_coverage=1
00:06:53.125  		--rc genhtml_function_coverage=1
00:06:53.125  		--rc genhtml_legend=1
00:06:53.125  		--rc geninfo_all_blocks=1
00:06:53.125  		--rc geninfo_unexecuted_blocks=1
00:06:53.125  		
00:06:53.125  		'
00:06:53.125    18:27:39 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:06:53.125  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:53.125  		--rc genhtml_branch_coverage=1
00:06:53.125  		--rc genhtml_function_coverage=1
00:06:53.125  		--rc genhtml_legend=1
00:06:53.125  		--rc geninfo_all_blocks=1
00:06:53.125  		--rc geninfo_unexecuted_blocks=1
00:06:53.125  		
00:06:53.125  		'
00:06:53.125    18:27:39 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:06:53.125  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:53.125  		--rc genhtml_branch_coverage=1
00:06:53.125  		--rc genhtml_function_coverage=1
00:06:53.125  		--rc genhtml_legend=1
00:06:53.125  		--rc geninfo_all_blocks=1
00:06:53.125  		--rc geninfo_unexecuted_blocks=1
00:06:53.125  		
00:06:53.125  		'
00:06:53.125   18:27:39 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/nvmf/common.sh
00:06:53.125     18:27:39 json_config -- nvmf/common.sh@7 -- # uname -s
00:06:53.125    18:27:39 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:06:53.125    18:27:39 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:06:53.125    18:27:39 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:06:53.125    18:27:39 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:06:53.125    18:27:39 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:06:53.125    18:27:39 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:53.125    18:27:39 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:53.125    18:27:39 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:53.125    18:27:39 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:53.125     18:27:39 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:53.125    18:27:39 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:808ec059-55a7-e511-906e-0012795d96dd
00:06:53.125    18:27:39 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=808ec059-55a7-e511-906e-0012795d96dd
00:06:53.125    18:27:39 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:53.125    18:27:39 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:53.125    18:27:39 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:06:53.125    18:27:39 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:06:53.125    18:27:39 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/common.sh
00:06:53.125     18:27:39 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:06:53.125     18:27:39 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:53.125     18:27:39 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:53.125     18:27:39 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:53.125      18:27:39 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:53.125      18:27:39 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:53.125      18:27:39 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:53.125      18:27:39 json_config -- paths/export.sh@5 -- # export PATH
00:06:53.125      18:27:39 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:53.125    18:27:39 json_config -- nvmf/common.sh@51 -- # : 0
00:06:53.125    18:27:39 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:06:53.125    18:27:39 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:06:53.125    18:27:39 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:06:53.125    18:27:39 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:06:53.125    18:27:39 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:06:53.125    18:27:39 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:06:53.125  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:06:53.125    18:27:39 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:06:53.125    18:27:39 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:06:53.125    18:27:39 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0
00:06:53.125   18:27:39 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/common.sh
00:06:53.125   18:27:39 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:06:53.125   18:27:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:06:53.125   18:27:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:06:53.125   18:27:39 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + 	SPDK_TEST_ISCSI + 	SPDK_TEST_NVMF + 	SPDK_TEST_VHOST + 	SPDK_TEST_VHOST_INIT + 	SPDK_TEST_RBD == 0 ))
00:06:53.125   18:27:39 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests'
00:06:53.125  WARNING: No tests are enabled so not running JSON configuration tests
00:06:53.125   18:27:39 json_config -- json_config/json_config.sh@28 -- # exit 0
00:06:53.125  
00:06:53.125  real	0m0.120s
00:06:53.125  user	0m0.080s
00:06:53.125  sys	0m0.043s
00:06:53.125   18:27:39 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:53.125   18:27:39 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:53.125  ************************************
00:06:53.125  END TEST json_config
00:06:53.125  ************************************
00:06:53.125   18:27:39  -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:06:53.125   18:27:39  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:53.125   18:27:39  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:53.125   18:27:39  -- common/autotest_common.sh@10 -- # set +x
00:06:53.125  ************************************
00:06:53.125  START TEST json_config_extra_key
00:06:53.125  ************************************
00:06:53.125   18:27:39 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:06:53.125    18:27:39 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:53.125     18:27:39 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version
00:06:53.125     18:27:39 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:53.385    18:27:39 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:53.385    18:27:39 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:53.385    18:27:39 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:53.385    18:27:39 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:53.385    18:27:39 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:06:53.385    18:27:39 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:06:53.385    18:27:39 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:06:53.385    18:27:39 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:06:53.385    18:27:39 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:06:53.385    18:27:39 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:06:53.385    18:27:39 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:06:53.385    18:27:39 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:53.385    18:27:39 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in
00:06:53.385    18:27:39 json_config_extra_key -- scripts/common.sh@345 -- # : 1
00:06:53.385    18:27:39 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:53.385    18:27:39 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:53.385     18:27:39 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1
00:06:53.386     18:27:39 json_config_extra_key -- scripts/common.sh@353 -- # local d=1
00:06:53.386     18:27:39 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:53.386     18:27:39 json_config_extra_key -- scripts/common.sh@355 -- # echo 1
00:06:53.386    18:27:39 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1
00:06:53.386     18:27:39 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2
00:06:53.386     18:27:39 json_config_extra_key -- scripts/common.sh@353 -- # local d=2
00:06:53.386     18:27:39 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:53.386     18:27:39 json_config_extra_key -- scripts/common.sh@355 -- # echo 2
00:06:53.386    18:27:39 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2
00:06:53.386    18:27:39 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:53.386    18:27:39 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:53.386    18:27:39 json_config_extra_key -- scripts/common.sh@368 -- # return 0
00:06:53.386    18:27:39 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:53.386    18:27:39 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:53.386  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:53.386  		--rc genhtml_branch_coverage=1
00:06:53.386  		--rc genhtml_function_coverage=1
00:06:53.386  		--rc genhtml_legend=1
00:06:53.386  		--rc geninfo_all_blocks=1
00:06:53.386  		--rc geninfo_unexecuted_blocks=1
00:06:53.386  		
00:06:53.386  		'
00:06:53.386    18:27:39 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:53.386  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:53.386  		--rc genhtml_branch_coverage=1
00:06:53.386  		--rc genhtml_function_coverage=1
00:06:53.386  		--rc genhtml_legend=1
00:06:53.386  		--rc geninfo_all_blocks=1
00:06:53.386  		--rc geninfo_unexecuted_blocks=1
00:06:53.386  		
00:06:53.386  		'
00:06:53.386    18:27:39 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:06:53.386  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:53.386  		--rc genhtml_branch_coverage=1
00:06:53.386  		--rc genhtml_function_coverage=1
00:06:53.386  		--rc genhtml_legend=1
00:06:53.386  		--rc geninfo_all_blocks=1
00:06:53.386  		--rc geninfo_unexecuted_blocks=1
00:06:53.386  		
00:06:53.386  		'
00:06:53.386    18:27:39 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:06:53.386  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:53.386  		--rc genhtml_branch_coverage=1
00:06:53.386  		--rc genhtml_function_coverage=1
00:06:53.386  		--rc genhtml_legend=1
00:06:53.386  		--rc geninfo_all_blocks=1
00:06:53.386  		--rc geninfo_unexecuted_blocks=1
00:06:53.386  		
00:06:53.386  		'
00:06:53.386   18:27:39 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/nvmf/common.sh
00:06:53.386     18:27:39 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:06:53.386    18:27:39 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:06:53.386    18:27:39 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:06:53.386    18:27:39 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:06:53.386    18:27:39 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:06:53.386    18:27:39 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:06:53.386    18:27:39 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:53.386    18:27:39 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:53.386    18:27:39 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:53.386    18:27:39 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:53.386     18:27:39 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:53.386    18:27:39 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:808ec059-55a7-e511-906e-0012795d96dd
00:06:53.386    18:27:39 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=808ec059-55a7-e511-906e-0012795d96dd
00:06:53.386    18:27:39 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:53.386    18:27:39 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:53.386    18:27:39 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:06:53.386    18:27:39 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:06:53.386    18:27:39 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/common.sh
00:06:53.386     18:27:39 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob
00:06:53.386     18:27:39 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:53.386     18:27:39 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:53.386     18:27:39 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:53.386      18:27:39 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:53.386      18:27:39 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:53.386      18:27:39 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:53.386      18:27:39 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:06:53.386      18:27:39 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:53.386    18:27:39 json_config_extra_key -- nvmf/common.sh@51 -- # : 0
00:06:53.386    18:27:39 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:06:53.386    18:27:39 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:06:53.386    18:27:39 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:06:53.386    18:27:39 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:06:53.386    18:27:39 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:06:53.386    18:27:39 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:06:53.386  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:06:53.386    18:27:39 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:06:53.386    18:27:39 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:06:53.386    18:27:39 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0
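Annotation: the `[: : integer expression expected` message a few lines above comes from the guard at nvmf/common.sh line 33, where an unset flag expands to the empty string inside `'[' '' -eq 1 ']'`. A minimal sketch of the failure mode and a defensive variant (variable names here are hypothetical, not SPDK's):

```shell
# Reproduction of the failing guard: `[ ... -eq ... ]` needs integers
# on both sides, so an empty expansion makes the test itself error out
# (exit status 2); the script keeps running because the status is
# simply treated as "false".
flag=""
if [ "$flag" -eq 1 ] 2>/dev/null; then
    echo "flag set"
else
    echo "flag unset or non-integer"
fi

# Defensive variant: default the variable to 0 before comparing, so
# the test is always well-formed.
if [ "${flag:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag unset"
fi
```

With `${flag:-0}` the comparison never emits the "integer expression expected" diagnostic, which is the usual fix for guards driven by optional environment variables.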
00:06:53.386   18:27:39 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/common.sh
00:06:53.386   18:27:39 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:06:53.386   18:27:39 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:06:53.386   18:27:39 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:06:53.386   18:27:39 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:06:53.386   18:27:39 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:06:53.386   18:27:39 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:06:53.386   18:27:39 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/extra_key.json')
00:06:53.386   18:27:39 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:06:53.386   18:27:39 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:06:53.386   18:27:39 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:06:53.386  INFO: launching applications...
00:06:53.386   18:27:39 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/extra_key.json
00:06:53.386   18:27:39 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:06:53.386   18:27:39 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:06:53.386   18:27:39 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:06:53.386   18:27:39 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:06:53.386   18:27:39 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:06:53.386   18:27:39 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:06:53.386   18:27:39 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:06:53.386   18:27:39 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=384386
00:06:53.386   18:27:39 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/extra_key.json
00:06:53.386   18:27:39 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:06:53.386  Waiting for target to run...
00:06:53.386   18:27:39 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 384386 /var/tmp/spdk_tgt.sock
00:06:53.386   18:27:39 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 384386 ']'
00:06:53.386   18:27:39 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:06:53.386   18:27:39 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:53.386   18:27:39 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:06:53.386  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:06:53.387   18:27:39 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:53.387   18:27:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:06:53.387  [2024-11-17 18:27:39.837073] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:06:53.387  [2024-11-17 18:27:39.837223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid384386 ]
00:06:53.955  [2024-11-17 18:27:40.301247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:53.955  [2024-11-17 18:27:40.340827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:54.216   18:27:40 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:54.216   18:27:40 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0
00:06:54.216   18:27:40 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:06:54.216  
00:06:54.216   18:27:40 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:06:54.216  INFO: shutting down applications...
00:06:54.216   18:27:40 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:06:54.216   18:27:40 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:06:54.216   18:27:40 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:06:54.216   18:27:40 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 384386 ]]
00:06:54.216   18:27:40 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 384386
00:06:54.216   18:27:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:06:54.216   18:27:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:06:54.216   18:27:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 384386
00:06:54.216   18:27:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:06:54.785   18:27:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:06:54.785   18:27:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:06:54.785   18:27:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 384386
00:06:54.785   18:27:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:06:55.355   18:27:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:06:55.355   18:27:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:06:55.355   18:27:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 384386
00:06:55.355   18:27:41 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:06:55.355   18:27:41 json_config_extra_key -- json_config/common.sh@43 -- # break
00:06:55.355   18:27:41 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:06:55.355   18:27:41 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:06:55.355  SPDK target shutdown done
00:06:55.355   18:27:41 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:06:55.355  Success
00:06:55.355  
00:06:55.355  real	0m2.138s
00:06:55.355  user	0m1.517s
00:06:55.355  sys	0m0.621s
00:06:55.355   18:27:41 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:55.355   18:27:41 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:06:55.355  ************************************
00:06:55.355  END TEST json_config_extra_key
00:06:55.355  ************************************
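Annotation: the shutdown sequence traced in this test (json_config/common.sh lines 38-45) sends SIGINT to the target, then polls `kill -0` up to 30 times with a 0.5 s sleep until the PID disappears. A self-contained sketch of that pattern; the harness signals spdk_tgt with SIGINT, but this demo uses SIGTERM because a backgrounded child of a non-interactive shell ignores SIGINT:

```shell
# Stand-in for the spdk_tgt process.
sleep 60 &
pid=$!

# Request shutdown, then poll: `kill -0` delivers no signal, it only
# probes whether the PID still exists.
kill -SIGTERM "$pid"
for (( i = 0; i < 30; i++ )); do
    if ! kill -0 "$pid" 2>/dev/null; then
        echo 'SPDK target shutdown done'
        break
    fi
    sleep 0.5
done
```

The 30 x 0.5 s bound gives the target roughly 15 seconds to exit cleanly before the harness gives up, matching the `(( i < 30 ))` / `sleep 0.5` pairs visible in the trace.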
00:06:55.355   18:27:41  -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:06:55.355   18:27:41  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:55.355   18:27:41  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:55.355   18:27:41  -- common/autotest_common.sh@10 -- # set +x
00:06:55.355  ************************************
00:06:55.355  START TEST alias_rpc
00:06:55.355  ************************************
00:06:55.355   18:27:41 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:06:55.355  * Looking for test storage...
00:06:55.355  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/alias_rpc
00:06:55.355    18:27:41 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:55.355     18:27:41 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:06:55.355     18:27:41 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:55.355    18:27:41 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:55.355    18:27:41 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:55.355    18:27:41 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:55.355    18:27:41 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:55.355    18:27:41 alias_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:06:55.355    18:27:41 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:06:55.355    18:27:41 alias_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:06:55.355    18:27:41 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:06:55.355    18:27:41 alias_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:06:55.355    18:27:41 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:06:55.355    18:27:41 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:06:55.355    18:27:41 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:55.355    18:27:41 alias_rpc -- scripts/common.sh@344 -- # case "$op" in
00:06:55.355    18:27:41 alias_rpc -- scripts/common.sh@345 -- # : 1
00:06:55.355    18:27:41 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:55.355    18:27:41 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:55.355     18:27:41 alias_rpc -- scripts/common.sh@365 -- # decimal 1
00:06:55.355     18:27:41 alias_rpc -- scripts/common.sh@353 -- # local d=1
00:06:55.355     18:27:41 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:55.355     18:27:41 alias_rpc -- scripts/common.sh@355 -- # echo 1
00:06:55.355    18:27:41 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:06:55.355     18:27:41 alias_rpc -- scripts/common.sh@366 -- # decimal 2
00:06:55.355     18:27:41 alias_rpc -- scripts/common.sh@353 -- # local d=2
00:06:55.355     18:27:41 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:55.355     18:27:41 alias_rpc -- scripts/common.sh@355 -- # echo 2
00:06:55.355    18:27:41 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:06:55.355    18:27:41 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:55.355    18:27:41 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:55.355    18:27:41 alias_rpc -- scripts/common.sh@368 -- # return 0
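Annotation: the `cmp_versions 1.15 '<' 2` walk above splits both versions on `.`, `-`, and `:`, then compares component by component, padding the shorter list with implicit zeros. A hypothetical condensed re-implementation of that `lt` helper (it skips the trace's `decimal` digit validation for brevity):

```shell
# lt VER1 VER2 -> exit 0 iff VER1 is strictly less than VER2 under
# component-wise dotted-version ordering (so "1.15 < 2" because the
# first components compare 1 < 2).
lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # pad missing parts with 0
        (( a < b )) && return 0                  # strictly less: done
        (( a > b )) && return 1                  # strictly greater: done
    done
    return 1                                     # equal is not "less than"
}

lt 1.15 2    && echo "1.15 < 2"
lt 2.1  1.15 || echo "2.1 >= 1.15"
```

The harness uses this to decide whether the installed `lcov` is new enough to accept the `--rc` coverage options exported just below.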
00:06:55.355    18:27:41 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:55.355    18:27:41 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:55.355  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:55.355  		--rc genhtml_branch_coverage=1
00:06:55.355  		--rc genhtml_function_coverage=1
00:06:55.355  		--rc genhtml_legend=1
00:06:55.355  		--rc geninfo_all_blocks=1
00:06:55.355  		--rc geninfo_unexecuted_blocks=1
00:06:55.355  		
00:06:55.355  		'
00:06:55.355    18:27:41 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:55.355  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:55.355  		--rc genhtml_branch_coverage=1
00:06:55.355  		--rc genhtml_function_coverage=1
00:06:55.355  		--rc genhtml_legend=1
00:06:55.355  		--rc geninfo_all_blocks=1
00:06:55.355  		--rc geninfo_unexecuted_blocks=1
00:06:55.355  		
00:06:55.355  		'
00:06:55.355    18:27:41 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:06:55.355  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:55.355  		--rc genhtml_branch_coverage=1
00:06:55.355  		--rc genhtml_function_coverage=1
00:06:55.355  		--rc genhtml_legend=1
00:06:55.355  		--rc geninfo_all_blocks=1
00:06:55.355  		--rc geninfo_unexecuted_blocks=1
00:06:55.355  		
00:06:55.355  		'
00:06:55.355    18:27:41 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:06:55.355  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:55.355  		--rc genhtml_branch_coverage=1
00:06:55.355  		--rc genhtml_function_coverage=1
00:06:55.355  		--rc genhtml_legend=1
00:06:55.355  		--rc geninfo_all_blocks=1
00:06:55.355  		--rc geninfo_unexecuted_blocks=1
00:06:55.355  		
00:06:55.355  		'
00:06:55.355   18:27:41 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:06:55.355   18:27:41 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:06:55.355   18:27:41 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=384854
00:06:55.355   18:27:41 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 384854
00:06:55.355   18:27:41 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 384854 ']'
00:06:55.355   18:27:41 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:55.355   18:27:41 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:55.355   18:27:41 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:55.355  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:55.355   18:27:41 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:55.355   18:27:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:55.615  [2024-11-17 18:27:41.994129] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:06:55.615  [2024-11-17 18:27:41.994307] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid384854 ]
00:06:55.615  [2024-11-17 18:27:42.102822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:55.615  [2024-11-17 18:27:42.137115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:56.555   18:27:42 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:56.555   18:27:42 alias_rpc -- common/autotest_common.sh@868 -- # return 0
00:06:56.555   18:27:42 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py load_config -i
00:06:56.555   18:27:43 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 384854
00:06:56.555   18:27:43 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 384854 ']'
00:06:56.555   18:27:43 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 384854
00:06:56.555    18:27:43 alias_rpc -- common/autotest_common.sh@959 -- # uname
00:06:56.555   18:27:43 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:56.555    18:27:43 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 384854
00:06:56.814   18:27:43 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:56.815   18:27:43 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:56.815   18:27:43 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 384854'
00:06:56.815  killing process with pid 384854
00:06:56.815   18:27:43 alias_rpc -- common/autotest_common.sh@973 -- # kill 384854
00:06:56.815   18:27:43 alias_rpc -- common/autotest_common.sh@978 -- # wait 384854
00:06:57.074  
00:06:57.074  real	0m1.818s
00:06:57.074  user	0m1.952s
00:06:57.074  sys	0m0.568s
00:06:57.074   18:27:43 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:57.074   18:27:43 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:57.074  ************************************
00:06:57.074  END TEST alias_rpc
00:06:57.074  ************************************
00:06:57.074   18:27:43  -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]]
00:06:57.074   18:27:43  -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/spdkcli/tcp.sh
00:06:57.074   18:27:43  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:57.074   18:27:43  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:57.074   18:27:43  -- common/autotest_common.sh@10 -- # set +x
00:06:57.335  ************************************
00:06:57.335  START TEST spdkcli_tcp
00:06:57.335  ************************************
00:06:57.335   18:27:43 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/spdkcli/tcp.sh
00:06:57.335  * Looking for test storage...
00:06:57.335  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/spdkcli
00:06:57.335    18:27:43 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:57.335     18:27:43 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version
00:06:57.335     18:27:43 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:57.335    18:27:43 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:57.335    18:27:43 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:57.335    18:27:43 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:57.335    18:27:43 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:57.335    18:27:43 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:06:57.335    18:27:43 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:06:57.335    18:27:43 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:06:57.335    18:27:43 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:06:57.335    18:27:43 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:06:57.335    18:27:43 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:06:57.335    18:27:43 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:06:57.335    18:27:43 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:57.335    18:27:43 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in
00:06:57.335    18:27:43 spdkcli_tcp -- scripts/common.sh@345 -- # : 1
00:06:57.335    18:27:43 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:57.335    18:27:43 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:57.335     18:27:43 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1
00:06:57.335     18:27:43 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1
00:06:57.335     18:27:43 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:57.335     18:27:43 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1
00:06:57.335    18:27:43 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:06:57.335     18:27:43 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2
00:06:57.335     18:27:43 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2
00:06:57.335     18:27:43 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:57.335     18:27:43 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2
00:06:57.335    18:27:43 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:06:57.335    18:27:43 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:57.335    18:27:43 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:57.335    18:27:43 spdkcli_tcp -- scripts/common.sh@368 -- # return 0
00:06:57.335    18:27:43 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:57.335    18:27:43 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:57.335  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:57.335  		--rc genhtml_branch_coverage=1
00:06:57.335  		--rc genhtml_function_coverage=1
00:06:57.335  		--rc genhtml_legend=1
00:06:57.335  		--rc geninfo_all_blocks=1
00:06:57.335  		--rc geninfo_unexecuted_blocks=1
00:06:57.335  		
00:06:57.335  		'
00:06:57.335    18:27:43 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:57.335  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:57.335  		--rc genhtml_branch_coverage=1
00:06:57.335  		--rc genhtml_function_coverage=1
00:06:57.335  		--rc genhtml_legend=1
00:06:57.335  		--rc geninfo_all_blocks=1
00:06:57.335  		--rc geninfo_unexecuted_blocks=1
00:06:57.335  		
00:06:57.335  		'
00:06:57.335    18:27:43 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:06:57.335  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:57.335  		--rc genhtml_branch_coverage=1
00:06:57.335  		--rc genhtml_function_coverage=1
00:06:57.335  		--rc genhtml_legend=1
00:06:57.335  		--rc geninfo_all_blocks=1
00:06:57.335  		--rc geninfo_unexecuted_blocks=1
00:06:57.335  		
00:06:57.335  		'
00:06:57.335    18:27:43 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:06:57.335  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:57.335  		--rc genhtml_branch_coverage=1
00:06:57.335  		--rc genhtml_function_coverage=1
00:06:57.335  		--rc genhtml_legend=1
00:06:57.335  		--rc geninfo_all_blocks=1
00:06:57.335  		--rc geninfo_unexecuted_blocks=1
00:06:57.335  		
00:06:57.335  		'
00:06:57.335   18:27:43 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/spdkcli/common.sh
00:06:57.335    18:27:43 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:06:57.335    18:27:43 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/clear_config.py
00:06:57.335   18:27:43 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:06:57.335   18:27:43 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998
00:06:57.335   18:27:43 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:06:57.335   18:27:43 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:06:57.335   18:27:43 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:57.335   18:27:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:57.335   18:27:43 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=385133
00:06:57.335   18:27:43 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:06:57.335   18:27:43 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 385133
00:06:57.335   18:27:43 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 385133 ']'
00:06:57.335   18:27:43 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:57.335   18:27:43 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:57.335   18:27:43 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:57.335  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:57.335   18:27:43 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:57.335   18:27:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:57.335  [2024-11-17 18:27:43.864773] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:06:57.335  [2024-11-17 18:27:43.864916] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid385133 ]
00:06:57.595  [2024-11-17 18:27:43.974519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:57.595  [2024-11-17 18:27:44.011007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:57.595  [2024-11-17 18:27:44.011042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:58.165   18:27:44 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:58.165   18:27:44 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0
00:06:58.165   18:27:44 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:06:58.165   18:27:44 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=385342
00:06:58.165   18:27:44 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:06:58.426  [
00:06:58.426    "bdev_malloc_delete",
00:06:58.426    "bdev_malloc_create",
00:06:58.426    "bdev_null_resize",
00:06:58.426    "bdev_null_delete",
00:06:58.426    "bdev_null_create",
00:06:58.426    "bdev_nvme_cuse_unregister",
00:06:58.426    "bdev_nvme_cuse_register",
00:06:58.426    "bdev_opal_new_user",
00:06:58.426    "bdev_opal_set_lock_state",
00:06:58.426    "bdev_opal_delete",
00:06:58.426    "bdev_opal_get_info",
00:06:58.426    "bdev_opal_create",
00:06:58.426    "bdev_nvme_opal_revert",
00:06:58.426    "bdev_nvme_opal_init",
00:06:58.426    "bdev_nvme_send_cmd",
00:06:58.426    "bdev_nvme_set_keys",
00:06:58.426    "bdev_nvme_get_path_iostat",
00:06:58.426    "bdev_nvme_get_mdns_discovery_info",
00:06:58.426    "bdev_nvme_stop_mdns_discovery",
00:06:58.426    "bdev_nvme_start_mdns_discovery",
00:06:58.426    "bdev_nvme_set_multipath_policy",
00:06:58.426    "bdev_nvme_set_preferred_path",
00:06:58.426    "bdev_nvme_get_io_paths",
00:06:58.426    "bdev_nvme_remove_error_injection",
00:06:58.426    "bdev_nvme_add_error_injection",
00:06:58.426    "bdev_nvme_get_discovery_info",
00:06:58.426    "bdev_nvme_stop_discovery",
00:06:58.426    "bdev_nvme_start_discovery",
00:06:58.426    "bdev_nvme_get_controller_health_info",
00:06:58.426    "bdev_nvme_disable_controller",
00:06:58.426    "bdev_nvme_enable_controller",
00:06:58.426    "bdev_nvme_reset_controller",
00:06:58.426    "bdev_nvme_get_transport_statistics",
00:06:58.426    "bdev_nvme_apply_firmware",
00:06:58.426    "bdev_nvme_detach_controller",
00:06:58.426    "bdev_nvme_get_controllers",
00:06:58.426    "bdev_nvme_attach_controller",
00:06:58.426    "bdev_nvme_set_hotplug",
00:06:58.426    "bdev_nvme_set_options",
00:06:58.426    "bdev_passthru_delete",
00:06:58.426    "bdev_passthru_create",
00:06:58.426    "bdev_lvol_set_parent_bdev",
00:06:58.426    "bdev_lvol_set_parent",
00:06:58.426    "bdev_lvol_check_shallow_copy",
00:06:58.426    "bdev_lvol_start_shallow_copy",
00:06:58.426    "bdev_lvol_grow_lvstore",
00:06:58.426    "bdev_lvol_get_lvols",
00:06:58.426    "bdev_lvol_get_lvstores",
00:06:58.426    "bdev_lvol_delete",
00:06:58.426    "bdev_lvol_set_read_only",
00:06:58.426    "bdev_lvol_resize",
00:06:58.426    "bdev_lvol_decouple_parent",
00:06:58.426    "bdev_lvol_inflate",
00:06:58.426    "bdev_lvol_rename",
00:06:58.426    "bdev_lvol_clone_bdev",
00:06:58.426    "bdev_lvol_clone",
00:06:58.426    "bdev_lvol_snapshot",
00:06:58.426    "bdev_lvol_create",
00:06:58.426    "bdev_lvol_delete_lvstore",
00:06:58.426    "bdev_lvol_rename_lvstore",
00:06:58.426    "bdev_lvol_create_lvstore",
00:06:58.426    "bdev_raid_set_options",
00:06:58.426    "bdev_raid_remove_base_bdev",
00:06:58.426    "bdev_raid_add_base_bdev",
00:06:58.426    "bdev_raid_delete",
00:06:58.426    "bdev_raid_create",
00:06:58.426    "bdev_raid_get_bdevs",
00:06:58.426    "bdev_error_inject_error",
00:06:58.426    "bdev_error_delete",
00:06:58.426    "bdev_error_create",
00:06:58.426    "bdev_split_delete",
00:06:58.426    "bdev_split_create",
00:06:58.426    "bdev_delay_delete",
00:06:58.426    "bdev_delay_create",
00:06:58.426    "bdev_delay_update_latency",
00:06:58.426    "bdev_zone_block_delete",
00:06:58.426    "bdev_zone_block_create",
00:06:58.426    "blobfs_create",
00:06:58.426    "blobfs_detect",
00:06:58.426    "blobfs_set_cache_size",
00:06:58.426    "bdev_crypto_delete",
00:06:58.426    "bdev_crypto_create",
00:06:58.426    "bdev_aio_delete",
00:06:58.426    "bdev_aio_rescan",
00:06:58.426    "bdev_aio_create",
00:06:58.426    "bdev_ftl_set_property",
00:06:58.426    "bdev_ftl_get_properties",
00:06:58.426    "bdev_ftl_get_stats",
00:06:58.426    "bdev_ftl_unmap",
00:06:58.426    "bdev_ftl_unload",
00:06:58.426    "bdev_ftl_delete",
00:06:58.426    "bdev_ftl_load",
00:06:58.426    "bdev_ftl_create",
00:06:58.426    "bdev_virtio_attach_controller",
00:06:58.426    "bdev_virtio_scsi_get_devices",
00:06:58.426    "bdev_virtio_detach_controller",
00:06:58.426    "bdev_virtio_blk_set_hotplug",
00:06:58.426    "bdev_iscsi_delete",
00:06:58.426    "bdev_iscsi_create",
00:06:58.426    "bdev_iscsi_set_options",
00:06:58.426    "accel_error_inject_error",
00:06:58.426    "ioat_scan_accel_module",
00:06:58.426    "dsa_scan_accel_module",
00:06:58.426    "iaa_scan_accel_module",
00:06:58.426    "dpdk_cryptodev_get_driver",
00:06:58.426    "dpdk_cryptodev_set_driver",
00:06:58.426    "dpdk_cryptodev_scan_accel_module",
00:06:58.426    "vfu_virtio_create_fs_endpoint",
00:06:58.426    "vfu_virtio_create_scsi_endpoint",
00:06:58.426    "vfu_virtio_scsi_remove_target",
00:06:58.426    "vfu_virtio_scsi_add_target",
00:06:58.426    "vfu_virtio_create_blk_endpoint",
00:06:58.426    "vfu_virtio_delete_endpoint",
00:06:58.426    "keyring_file_remove_key",
00:06:58.426    "keyring_file_add_key",
00:06:58.426    "keyring_linux_set_options",
00:06:58.426    "fsdev_aio_delete",
00:06:58.426    "fsdev_aio_create",
00:06:58.426    "iscsi_get_histogram",
00:06:58.426    "iscsi_enable_histogram",
00:06:58.426    "iscsi_set_options",
00:06:58.426    "iscsi_get_auth_groups",
00:06:58.426    "iscsi_auth_group_remove_secret",
00:06:58.426    "iscsi_auth_group_add_secret",
00:06:58.426    "iscsi_delete_auth_group",
00:06:58.426    "iscsi_create_auth_group",
00:06:58.426    "iscsi_set_discovery_auth",
00:06:58.426    "iscsi_get_options",
00:06:58.426    "iscsi_target_node_request_logout",
00:06:58.426    "iscsi_target_node_set_redirect",
00:06:58.426    "iscsi_target_node_set_auth",
00:06:58.426    "iscsi_target_node_add_lun",
00:06:58.426    "iscsi_get_stats",
00:06:58.426    "iscsi_get_connections",
00:06:58.426    "iscsi_portal_group_set_auth",
00:06:58.426    "iscsi_start_portal_group",
00:06:58.426    "iscsi_delete_portal_group",
00:06:58.426    "iscsi_create_portal_group",
00:06:58.426    "iscsi_get_portal_groups",
00:06:58.426    "iscsi_delete_target_node",
00:06:58.426    "iscsi_target_node_remove_pg_ig_maps",
00:06:58.426    "iscsi_target_node_add_pg_ig_maps",
00:06:58.426    "iscsi_create_target_node",
00:06:58.426    "iscsi_get_target_nodes",
00:06:58.426    "iscsi_delete_initiator_group",
00:06:58.426    "iscsi_initiator_group_remove_initiators",
00:06:58.426    "iscsi_initiator_group_add_initiators",
00:06:58.426    "iscsi_create_initiator_group",
00:06:58.426    "iscsi_get_initiator_groups",
00:06:58.426    "nvmf_set_crdt",
00:06:58.427    "nvmf_set_config",
00:06:58.427    "nvmf_set_max_subsystems",
00:06:58.427    "nvmf_stop_mdns_prr",
00:06:58.427    "nvmf_publish_mdns_prr",
00:06:58.427    "nvmf_subsystem_get_listeners",
00:06:58.427    "nvmf_subsystem_get_qpairs",
00:06:58.427    "nvmf_subsystem_get_controllers",
00:06:58.427    "nvmf_get_stats",
00:06:58.427    "nvmf_get_transports",
00:06:58.427    "nvmf_create_transport",
00:06:58.427    "nvmf_get_targets",
00:06:58.427    "nvmf_delete_target",
00:06:58.427    "nvmf_create_target",
00:06:58.427    "nvmf_subsystem_allow_any_host",
00:06:58.427    "nvmf_subsystem_set_keys",
00:06:58.427    "nvmf_subsystem_remove_host",
00:06:58.427    "nvmf_subsystem_add_host",
00:06:58.427    "nvmf_ns_remove_host",
00:06:58.427    "nvmf_ns_add_host",
00:06:58.427    "nvmf_subsystem_remove_ns",
00:06:58.427    "nvmf_subsystem_set_ns_ana_group",
00:06:58.427    "nvmf_subsystem_add_ns",
00:06:58.427    "nvmf_subsystem_listener_set_ana_state",
00:06:58.427    "nvmf_discovery_get_referrals",
00:06:58.427    "nvmf_discovery_remove_referral",
00:06:58.427    "nvmf_discovery_add_referral",
00:06:58.427    "nvmf_subsystem_remove_listener",
00:06:58.427    "nvmf_subsystem_add_listener",
00:06:58.427    "nvmf_delete_subsystem",
00:06:58.427    "nvmf_create_subsystem",
00:06:58.427    "nvmf_get_subsystems",
00:06:58.427    "env_dpdk_get_mem_stats",
00:06:58.427    "nbd_get_disks",
00:06:58.427    "nbd_stop_disk",
00:06:58.427    "nbd_start_disk",
00:06:58.427    "ublk_recover_disk",
00:06:58.427    "ublk_get_disks",
00:06:58.427    "ublk_stop_disk",
00:06:58.427    "ublk_start_disk",
00:06:58.427    "ublk_destroy_target",
00:06:58.427    "ublk_create_target",
00:06:58.427    "virtio_blk_create_transport",
00:06:58.427    "virtio_blk_get_transports",
00:06:58.427    "vhost_controller_set_coalescing",
00:06:58.427    "vhost_get_controllers",
00:06:58.427    "vhost_delete_controller",
00:06:58.427    "vhost_create_blk_controller",
00:06:58.427    "vhost_scsi_controller_remove_target",
00:06:58.427    "vhost_scsi_controller_add_target",
00:06:58.427    "vhost_start_scsi_controller",
00:06:58.427    "vhost_create_scsi_controller",
00:06:58.427    "thread_set_cpumask",
00:06:58.427    "scheduler_set_options",
00:06:58.427    "framework_get_governor",
00:06:58.427    "framework_get_scheduler",
00:06:58.427    "framework_set_scheduler",
00:06:58.427    "framework_get_reactors",
00:06:58.427    "thread_get_io_channels",
00:06:58.427    "thread_get_pollers",
00:06:58.427    "thread_get_stats",
00:06:58.427    "framework_monitor_context_switch",
00:06:58.427    "spdk_kill_instance",
00:06:58.427    "log_enable_timestamps",
00:06:58.427    "log_get_flags",
00:06:58.427    "log_clear_flag",
00:06:58.427    "log_set_flag",
00:06:58.427    "log_get_level",
00:06:58.427    "log_set_level",
00:06:58.427    "log_get_print_level",
00:06:58.427    "log_set_print_level",
00:06:58.427    "framework_enable_cpumask_locks",
00:06:58.427    "framework_disable_cpumask_locks",
00:06:58.427    "framework_wait_init",
00:06:58.427    "framework_start_init",
00:06:58.427    "scsi_get_devices",
00:06:58.427    "bdev_get_histogram",
00:06:58.427    "bdev_enable_histogram",
00:06:58.427    "bdev_set_qos_limit",
00:06:58.427    "bdev_set_qd_sampling_period",
00:06:58.427    "bdev_get_bdevs",
00:06:58.427    "bdev_reset_iostat",
00:06:58.427    "bdev_get_iostat",
00:06:58.427    "bdev_examine",
00:06:58.427    "bdev_wait_for_examine",
00:06:58.427    "bdev_set_options",
00:06:58.427    "accel_get_stats",
00:06:58.427    "accel_set_options",
00:06:58.427    "accel_set_driver",
00:06:58.427    "accel_crypto_key_destroy",
00:06:58.427    "accel_crypto_keys_get",
00:06:58.427    "accel_crypto_key_create",
00:06:58.427    "accel_assign_opc",
00:06:58.427    "accel_get_module_info",
00:06:58.427    "accel_get_opc_assignments",
00:06:58.427    "vmd_rescan",
00:06:58.427    "vmd_remove_device",
00:06:58.427    "vmd_enable",
00:06:58.427    "sock_get_default_impl",
00:06:58.427    "sock_set_default_impl",
00:06:58.427    "sock_impl_set_options",
00:06:58.427    "sock_impl_get_options",
00:06:58.427    "iobuf_get_stats",
00:06:58.427    "iobuf_set_options",
00:06:58.427    "keyring_get_keys",
00:06:58.427    "vfu_tgt_set_base_path",
00:06:58.427    "framework_get_pci_devices",
00:06:58.427    "framework_get_config",
00:06:58.427    "framework_get_subsystems",
00:06:58.427    "fsdev_set_opts",
00:06:58.427    "fsdev_get_opts",
00:06:58.427    "trace_get_info",
00:06:58.427    "trace_get_tpoint_group_mask",
00:06:58.427    "trace_disable_tpoint_group",
00:06:58.427    "trace_enable_tpoint_group",
00:06:58.427    "trace_clear_tpoint_mask",
00:06:58.427    "trace_set_tpoint_mask",
00:06:58.427    "notify_get_notifications",
00:06:58.427    "notify_get_types",
00:06:58.427    "spdk_get_version",
00:06:58.427    "rpc_get_methods"
00:06:58.427  ]
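The method list above is the JSON result of the `rpc_get_methods` call issued by spdkcli over the target's UNIX domain socket. As a minimal sketch, this is roughly the JSON-RPC 2.0 payload that SPDK's `scripts/rpc.py` frames for that call (the `current` parameter and the default socket path `/var/tmp/spdk.sock` are taken from SPDK's documented defaults; actually sending it requires a running `spdk_tgt`, so only the request is built here):

```python
import json

# Sketch of the JSON-RPC 2.0 request behind the method list above.
# Only the payload is constructed; delivery over /var/tmp/spdk.sock
# needs a live spdk_tgt process.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "rpc_get_methods",
    # current=False lists every registered method, not just those
    # callable in the target's current runtime state.
    "params": {"current": False},
}
payload = json.dumps(request)
print(payload)
```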
00:06:58.427   18:27:44 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:06:58.427   18:27:44 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:58.427   18:27:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:58.427   18:27:44 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:06:58.427   18:27:44 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 385133
00:06:58.427   18:27:44 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 385133 ']'
00:06:58.427   18:27:44 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 385133
00:06:58.427    18:27:44 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname
00:06:58.427   18:27:44 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:58.427    18:27:44 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 385133
00:06:58.427   18:27:44 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:58.427   18:27:44 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:58.427   18:27:44 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 385133'
00:06:58.427  killing process with pid 385133
00:06:58.427   18:27:44 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 385133
00:06:58.427   18:27:44 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 385133
00:06:58.997  
00:06:58.997  real	0m1.759s
00:06:58.997  user	0m3.204s
00:06:58.997  sys	0m0.539s
00:06:58.997   18:27:45 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:58.997   18:27:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:58.997  ************************************
00:06:58.997  END TEST spdkcli_tcp
00:06:58.997  ************************************
00:06:58.997   18:27:45  -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:06:58.997   18:27:45  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:58.997   18:27:45  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:58.997   18:27:45  -- common/autotest_common.sh@10 -- # set +x
00:06:58.997  ************************************
00:06:58.997  START TEST dpdk_mem_utility
00:06:58.997  ************************************
00:06:58.997   18:27:45 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:06:58.997  * Looking for test storage...
00:06:58.997  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/dpdk_memory_utility
00:06:58.997    18:27:45 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:58.997     18:27:45 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version
00:06:58.997     18:27:45 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:58.997    18:27:45 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:58.997    18:27:45 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:58.997    18:27:45 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:58.997    18:27:45 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:58.997    18:27:45 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-:
00:06:58.997    18:27:45 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1
00:06:58.997    18:27:45 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-:
00:06:58.997    18:27:45 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2
00:06:58.997    18:27:45 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<'
00:06:58.997    18:27:45 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2
00:06:58.997    18:27:45 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1
00:06:58.997    18:27:45 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:58.997    18:27:45 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in
00:06:58.997    18:27:45 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1
00:06:58.997    18:27:45 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:58.997    18:27:45 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:58.997     18:27:45 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1
00:06:58.997     18:27:45 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1
00:06:58.997     18:27:45 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:58.997     18:27:45 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1
00:06:58.998    18:27:45 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1
00:06:58.998     18:27:45 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2
00:06:58.998     18:27:45 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2
00:06:58.998     18:27:45 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:58.998     18:27:45 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2
00:06:58.998    18:27:45 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2
00:06:58.998    18:27:45 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:58.998    18:27:45 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:58.998    18:27:45 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0
00:06:58.998    18:27:45 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:58.998    18:27:45 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:58.998  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:58.998  		--rc genhtml_branch_coverage=1
00:06:58.998  		--rc genhtml_function_coverage=1
00:06:58.998  		--rc genhtml_legend=1
00:06:58.998  		--rc geninfo_all_blocks=1
00:06:58.998  		--rc geninfo_unexecuted_blocks=1
00:06:58.998  		
00:06:58.998  		'
00:06:58.998    18:27:45 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:58.998  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:58.998  		--rc genhtml_branch_coverage=1
00:06:58.998  		--rc genhtml_function_coverage=1
00:06:58.998  		--rc genhtml_legend=1
00:06:58.998  		--rc geninfo_all_blocks=1
00:06:58.998  		--rc geninfo_unexecuted_blocks=1
00:06:58.998  		
00:06:58.998  		'
00:06:58.998    18:27:45 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:06:58.998  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:58.998  		--rc genhtml_branch_coverage=1
00:06:58.998  		--rc genhtml_function_coverage=1
00:06:58.998  		--rc genhtml_legend=1
00:06:58.998  		--rc geninfo_all_blocks=1
00:06:58.998  		--rc geninfo_unexecuted_blocks=1
00:06:58.998  		
00:06:58.998  		'
00:06:58.998    18:27:45 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:06:58.998  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:58.998  		--rc genhtml_branch_coverage=1
00:06:58.998  		--rc genhtml_function_coverage=1
00:06:58.998  		--rc genhtml_legend=1
00:06:58.998  		--rc geninfo_all_blocks=1
00:06:58.998  		--rc geninfo_unexecuted_blocks=1
00:06:58.998  		
00:06:58.998  		'
00:06:58.998   18:27:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:06:58.998   18:27:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=385619
00:06:58.998   18:27:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 385619
00:06:58.998   18:27:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:06:58.998   18:27:45 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 385619 ']'
00:06:58.998   18:27:45 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:58.998   18:27:45 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:58.998   18:27:45 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:58.998  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:58.998   18:27:45 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:58.998   18:27:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:59.257  [2024-11-17 18:27:45.663223] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:06:59.257  [2024-11-17 18:27:45.663333] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid385619 ]
00:06:59.258  [2024-11-17 18:27:45.766781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:59.258  [2024-11-17 18:27:45.803256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:00.198   18:27:46 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:00.198   18:27:46 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0
00:07:00.198   18:27:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:07:00.198   18:27:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:07:00.198   18:27:46 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.198   18:27:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:07:00.198  {
00:07:00.198  "filename": "/tmp/spdk_mem_dump.txt"
00:07:00.198  }
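The `env_dpdk_get_mem_stats` RPC above returns only the path of the dump file that `spdk_tgt` writes; `scripts/dpdk_mem_info.py` then reads and summarizes that file. A minimal sketch of consuming the result shown in this log:

```python
import json

# The RPC result is a one-field JSON object naming the dump file
# (response copied verbatim from the log above).
response = '{ "filename": "/tmp/spdk_mem_dump.txt" }'
result = json.loads(response)
# dpdk_mem_info.py would open this path and parse the heap/mempool/
# memzone sections printed later in the log.
print(result["filename"])
```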
00:07:00.198   18:27:46 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.198   18:27:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:07:00.198  DPDK memory size 810.000000 MiB in 1 heap(s)
00:07:00.198  1 heaps totaling size 810.000000 MiB
00:07:00.198    size:  810.000000 MiB heap id: 0
00:07:00.198  end heaps----------
00:07:00.198  9 mempools totaling size 595.772034 MiB
00:07:00.198    size:  212.674988 MiB name: PDU_immediate_data_Pool
00:07:00.198    size:  158.602051 MiB name: PDU_data_out_Pool
00:07:00.198    size:   92.545471 MiB name: bdev_io_385619
00:07:00.198    size:   50.003479 MiB name: msgpool_385619
00:07:00.198    size:   36.509338 MiB name: fsdev_io_385619
00:07:00.198    size:   21.763794 MiB name: PDU_Pool
00:07:00.198    size:   19.513306 MiB name: SCSI_TASK_Pool
00:07:00.198    size:    4.133484 MiB name: evtpool_385619
00:07:00.198    size:    0.026123 MiB name: Session_Pool
00:07:00.198  end mempools-------
00:07:00.198  6 memzones totaling size 4.142822 MiB
00:07:00.198    size:    1.000366 MiB name: RG_ring_0_385619
00:07:00.198    size:    1.000366 MiB name: RG_ring_1_385619
00:07:00.198    size:    1.000366 MiB name: RG_ring_4_385619
00:07:00.198    size:    1.000366 MiB name: RG_ring_5_385619
00:07:00.198    size:    0.125366 MiB name: RG_ring_2_385619
00:07:00.198    size:    0.015991 MiB name: RG_ring_3_385619
00:07:00.198  end memzones-------
00:07:00.198   18:27:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:07:00.198  heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15
00:07:00.198    list of free elements. size: 10.862488 MiB
00:07:00.198      element at address: 0x200018a00000 with size:    0.999878 MiB
00:07:00.198      element at address: 0x200018c00000 with size:    0.999878 MiB
00:07:00.198      element at address: 0x200000400000 with size:    0.998535 MiB
00:07:00.198      element at address: 0x200031800000 with size:    0.994446 MiB
00:07:00.198      element at address: 0x200006400000 with size:    0.959839 MiB
00:07:00.198      element at address: 0x200012c00000 with size:    0.954285 MiB
00:07:00.198      element at address: 0x200018e00000 with size:    0.936584 MiB
00:07:00.198      element at address: 0x200000200000 with size:    0.717346 MiB
00:07:00.198      element at address: 0x20001a600000 with size:    0.582886 MiB
00:07:00.198      element at address: 0x200000c00000 with size:    0.495422 MiB
00:07:00.198      element at address: 0x20000a600000 with size:    0.490723 MiB
00:07:00.198      element at address: 0x200019000000 with size:    0.485657 MiB
00:07:00.198      element at address: 0x200003e00000 with size:    0.481934 MiB
00:07:00.198      element at address: 0x200027a00000 with size:    0.410034 MiB
00:07:00.198      element at address: 0x200000800000 with size:    0.355042 MiB
00:07:00.198    list of standard malloc elements. size: 199.218628 MiB
00:07:00.198      element at address: 0x20000a7fff80 with size:  132.000122 MiB
00:07:00.198      element at address: 0x2000065fff80 with size:   64.000122 MiB
00:07:00.198      element at address: 0x200018afff80 with size:    1.000122 MiB
00:07:00.198      element at address: 0x200018cfff80 with size:    1.000122 MiB
00:07:00.198      element at address: 0x200018efff80 with size:    1.000122 MiB
00:07:00.198      element at address: 0x2000003d9f00 with size:    0.140747 MiB
00:07:00.198      element at address: 0x200018eeff00 with size:    0.062622 MiB
00:07:00.198      element at address: 0x2000003fdf80 with size:    0.007935 MiB
00:07:00.198      element at address: 0x200018eefdc0 with size:    0.000305 MiB
00:07:00.198      element at address: 0x2000002d7c40 with size:    0.000183 MiB
00:07:00.198      element at address: 0x2000003d9e40 with size:    0.000183 MiB
00:07:00.198      element at address: 0x2000004ffa00 with size:    0.000183 MiB
00:07:00.198      element at address: 0x2000004ffac0 with size:    0.000183 MiB
00:07:00.198      element at address: 0x2000004ffb80 with size:    0.000183 MiB
00:07:00.198      element at address: 0x2000004ffd80 with size:    0.000183 MiB
00:07:00.198      element at address: 0x2000004ffe40 with size:    0.000183 MiB
00:07:00.198      element at address: 0x20000085ae40 with size:    0.000183 MiB
00:07:00.198      element at address: 0x20000085b040 with size:    0.000183 MiB
00:07:00.198      element at address: 0x20000085f300 with size:    0.000183 MiB
00:07:00.198      element at address: 0x20000087f5c0 with size:    0.000183 MiB
00:07:00.198      element at address: 0x20000087f680 with size:    0.000183 MiB
00:07:00.198      element at address: 0x2000008ff940 with size:    0.000183 MiB
00:07:00.198      element at address: 0x2000008ffb40 with size:    0.000183 MiB
00:07:00.198      element at address: 0x200000c7ed40 with size:    0.000183 MiB
00:07:00.198      element at address: 0x200000cff000 with size:    0.000183 MiB
00:07:00.198      element at address: 0x200000cff0c0 with size:    0.000183 MiB
00:07:00.198      element at address: 0x200003e7b600 with size:    0.000183 MiB
00:07:00.198      element at address: 0x200003e7b6c0 with size:    0.000183 MiB
00:07:00.198      element at address: 0x200003efb980 with size:    0.000183 MiB
00:07:00.198      element at address: 0x2000064fdd80 with size:    0.000183 MiB
00:07:00.198      element at address: 0x20000a67da00 with size:    0.000183 MiB
00:07:00.198      element at address: 0x20000a67dac0 with size:    0.000183 MiB
00:07:00.198      element at address: 0x20000a6fdd80 with size:    0.000183 MiB
00:07:00.198      element at address: 0x200012cf44c0 with size:    0.000183 MiB
00:07:00.198      element at address: 0x200018eefc40 with size:    0.000183 MiB
00:07:00.198      element at address: 0x200018eefd00 with size:    0.000183 MiB
00:07:00.198      element at address: 0x2000190bc740 with size:    0.000183 MiB
00:07:00.198      element at address: 0x20001a695380 with size:    0.000183 MiB
00:07:00.198      element at address: 0x20001a695440 with size:    0.000183 MiB
00:07:00.198      element at address: 0x200027a68f80 with size:    0.000183 MiB
00:07:00.198      element at address: 0x200027a69040 with size:    0.000183 MiB
00:07:00.198      element at address: 0x200027a6fc40 with size:    0.000183 MiB
00:07:00.198      element at address: 0x200027a6fe40 with size:    0.000183 MiB
00:07:00.198      element at address: 0x200027a6ff00 with size:    0.000183 MiB
00:07:00.198    list of memzone associated elements. size: 599.918884 MiB
00:07:00.198      element at address: 0x20001a695500 with size:  211.416748 MiB
00:07:00.198        associated memzone info: size:  211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:07:00.198      element at address: 0x200027a6ffc0 with size:  157.562561 MiB
00:07:00.198        associated memzone info: size:  157.562439 MiB name: MP_PDU_data_out_Pool_0
00:07:00.198      element at address: 0x200012df4780 with size:   92.045044 MiB
00:07:00.198        associated memzone info: size:   92.044922 MiB name: MP_bdev_io_385619_0
00:07:00.198      element at address: 0x200000dff380 with size:   48.003052 MiB
00:07:00.198        associated memzone info: size:   48.002930 MiB name: MP_msgpool_385619_0
00:07:00.198      element at address: 0x200003ffdb80 with size:   36.008911 MiB
00:07:00.198        associated memzone info: size:   36.008789 MiB name: MP_fsdev_io_385619_0
00:07:00.198      element at address: 0x2000191be940 with size:   20.255554 MiB
00:07:00.198        associated memzone info: size:   20.255432 MiB name: MP_PDU_Pool_0
00:07:00.198      element at address: 0x2000319feb40 with size:   18.005066 MiB
00:07:00.198        associated memzone info: size:   18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:07:00.199      element at address: 0x2000004fff00 with size:    3.000244 MiB
00:07:00.199        associated memzone info: size:    3.000122 MiB name: MP_evtpool_385619_0
00:07:00.199      element at address: 0x2000009ffe00 with size:    2.000488 MiB
00:07:00.199        associated memzone info: size:    2.000366 MiB name: RG_MP_msgpool_385619
00:07:00.199      element at address: 0x2000002d7d00 with size:    1.008118 MiB
00:07:00.199        associated memzone info: size:    1.007996 MiB name: MP_evtpool_385619
00:07:00.199      element at address: 0x20000a6fde40 with size:    1.008118 MiB
00:07:00.199        associated memzone info: size:    1.007996 MiB name: MP_PDU_Pool
00:07:00.199      element at address: 0x2000190bc800 with size:    1.008118 MiB
00:07:00.199        associated memzone info: size:    1.007996 MiB name: MP_PDU_immediate_data_Pool
00:07:00.199      element at address: 0x2000064fde40 with size:    1.008118 MiB
00:07:00.199        associated memzone info: size:    1.007996 MiB name: MP_PDU_data_out_Pool
00:07:00.199      element at address: 0x200003efba40 with size:    1.008118 MiB
00:07:00.199        associated memzone info: size:    1.007996 MiB name: MP_SCSI_TASK_Pool
00:07:00.199      element at address: 0x200000cff180 with size:    1.000488 MiB
00:07:00.199        associated memzone info: size:    1.000366 MiB name: RG_ring_0_385619
00:07:00.199      element at address: 0x2000008ffc00 with size:    1.000488 MiB
00:07:00.199        associated memzone info: size:    1.000366 MiB name: RG_ring_1_385619
00:07:00.199      element at address: 0x200012cf4580 with size:    1.000488 MiB
00:07:00.199        associated memzone info: size:    1.000366 MiB name: RG_ring_4_385619
00:07:00.199      element at address: 0x2000318fe940 with size:    1.000488 MiB
00:07:00.199        associated memzone info: size:    1.000366 MiB name: RG_ring_5_385619
00:07:00.199      element at address: 0x20000087f740 with size:    0.500488 MiB
00:07:00.199        associated memzone info: size:    0.500366 MiB name: RG_MP_fsdev_io_385619
00:07:00.199      element at address: 0x200000c7ee00 with size:    0.500488 MiB
00:07:00.199        associated memzone info: size:    0.500366 MiB name: RG_MP_bdev_io_385619
00:07:00.199      element at address: 0x20000a67db80 with size:    0.500488 MiB
00:07:00.199        associated memzone info: size:    0.500366 MiB name: RG_MP_PDU_Pool
00:07:00.199      element at address: 0x200003e7b780 with size:    0.500488 MiB
00:07:00.199        associated memzone info: size:    0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:07:00.199      element at address: 0x20001907c540 with size:    0.250488 MiB
00:07:00.199        associated memzone info: size:    0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:07:00.199      element at address: 0x2000002b7a40 with size:    0.125488 MiB
00:07:00.199        associated memzone info: size:    0.125366 MiB name: RG_MP_evtpool_385619
00:07:00.199      element at address: 0x20000085f3c0 with size:    0.125488 MiB
00:07:00.199        associated memzone info: size:    0.125366 MiB name: RG_ring_2_385619
00:07:00.199      element at address: 0x2000064f5b80 with size:    0.031738 MiB
00:07:00.199        associated memzone info: size:    0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:07:00.199      element at address: 0x200027a69100 with size:    0.023743 MiB
00:07:00.199        associated memzone info: size:    0.023621 MiB name: MP_Session_Pool_0
00:07:00.199      element at address: 0x20000085b100 with size:    0.016113 MiB
00:07:00.199        associated memzone info: size:    0.015991 MiB name: RG_ring_3_385619
00:07:00.199      element at address: 0x200027a6f240 with size:    0.002441 MiB
00:07:00.199        associated memzone info: size:    0.002319 MiB name: RG_MP_Session_Pool
00:07:00.199      element at address: 0x2000004ffc40 with size:    0.000305 MiB
00:07:00.199        associated memzone info: size:    0.000183 MiB name: MP_msgpool_385619
00:07:00.199      element at address: 0x2000008ffa00 with size:    0.000305 MiB
00:07:00.199        associated memzone info: size:    0.000183 MiB name: MP_fsdev_io_385619
00:07:00.199      element at address: 0x20000085af00 with size:    0.000305 MiB
00:07:00.199        associated memzone info: size:    0.000183 MiB name: MP_bdev_io_385619
00:07:00.199      element at address: 0x200027a6fd00 with size:    0.000305 MiB
00:07:00.199        associated memzone info: size:    0.000183 MiB name: MP_Session_Pool
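The mempool summary in the dump above can be cross-checked mechanically: the per-pool sizes should add up to the "9 mempools totaling size 595.772034 MiB" header. A small sketch parsing the `size: ... MiB name: ...` lines (sample copied verbatim from this log; the regex is an assumption about the line format, not part of `dpdk_mem_info.py` itself):

```python
import re

# Mempool section copied from the dpdk_mem_info.py output above.
dump = """\
9 mempools totaling size 595.772034 MiB
  size:  212.674988 MiB name: PDU_immediate_data_Pool
  size:  158.602051 MiB name: PDU_data_out_Pool
  size:   92.545471 MiB name: bdev_io_385619
  size:   50.003479 MiB name: msgpool_385619
  size:   36.509338 MiB name: fsdev_io_385619
  size:   21.763794 MiB name: PDU_Pool
  size:   19.513306 MiB name: SCSI_TASK_Pool
  size:    4.133484 MiB name: evtpool_385619
  size:    0.026123 MiB name: Session_Pool
"""
# Each pool line carries a size in MiB followed by its name; the header
# line lacks "name:" after the size, so it is not matched.
pools = {m.group(2): float(m.group(1))
         for m in re.finditer(r"size:\s+([\d.]+) MiB name: (\S+)", dump)}
total = sum(pools.values())
print(f"{len(pools)} mempools, {total:.6f} MiB")
```

Summing the parsed sizes reproduces the header's total, confirming the dump is internally consistent.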
00:07:00.199   18:27:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:07:00.199   18:27:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 385619
00:07:00.199   18:27:46 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 385619 ']'
00:07:00.199   18:27:46 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 385619
00:07:00.199    18:27:46 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:07:00.199   18:27:46 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:00.199    18:27:46 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 385619
00:07:00.199   18:27:46 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:00.199   18:27:46 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:00.199   18:27:46 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 385619'
00:07:00.199  killing process with pid 385619
00:07:00.199   18:27:46 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 385619
00:07:00.199   18:27:46 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 385619
00:07:00.769  
00:07:00.769  real	0m1.598s
00:07:00.769  user	0m1.622s
00:07:00.769  sys	0m0.496s
00:07:00.769   18:27:47 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:00.769   18:27:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:07:00.769  ************************************
00:07:00.769  END TEST dpdk_mem_utility
00:07:00.769  ************************************
00:07:00.769   18:27:47  -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/event.sh
00:07:00.769   18:27:47  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:00.769   18:27:47  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:00.769   18:27:47  -- common/autotest_common.sh@10 -- # set +x
00:07:00.769  ************************************
00:07:00.769  START TEST event
00:07:00.769  ************************************
00:07:00.769   18:27:47 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/event.sh
00:07:00.769  * Looking for test storage...
00:07:00.769  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event
00:07:00.769    18:27:47 event -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:07:00.769     18:27:47 event -- common/autotest_common.sh@1693 -- # lcov --version
00:07:00.769     18:27:47 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:07:00.769    18:27:47 event -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:07:00.769    18:27:47 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:00.770    18:27:47 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:00.770    18:27:47 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:00.770    18:27:47 event -- scripts/common.sh@336 -- # IFS=.-:
00:07:00.770    18:27:47 event -- scripts/common.sh@336 -- # read -ra ver1
00:07:00.770    18:27:47 event -- scripts/common.sh@337 -- # IFS=.-:
00:07:00.770    18:27:47 event -- scripts/common.sh@337 -- # read -ra ver2
00:07:00.770    18:27:47 event -- scripts/common.sh@338 -- # local 'op=<'
00:07:00.770    18:27:47 event -- scripts/common.sh@340 -- # ver1_l=2
00:07:00.770    18:27:47 event -- scripts/common.sh@341 -- # ver2_l=1
00:07:00.770    18:27:47 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:00.770    18:27:47 event -- scripts/common.sh@344 -- # case "$op" in
00:07:00.770    18:27:47 event -- scripts/common.sh@345 -- # : 1
00:07:00.770    18:27:47 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:00.770    18:27:47 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:00.770     18:27:47 event -- scripts/common.sh@365 -- # decimal 1
00:07:00.770     18:27:47 event -- scripts/common.sh@353 -- # local d=1
00:07:00.770     18:27:47 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:00.770     18:27:47 event -- scripts/common.sh@355 -- # echo 1
00:07:00.770    18:27:47 event -- scripts/common.sh@365 -- # ver1[v]=1
00:07:00.770     18:27:47 event -- scripts/common.sh@366 -- # decimal 2
00:07:00.770     18:27:47 event -- scripts/common.sh@353 -- # local d=2
00:07:00.770     18:27:47 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:00.770     18:27:47 event -- scripts/common.sh@355 -- # echo 2
00:07:00.770    18:27:47 event -- scripts/common.sh@366 -- # ver2[v]=2
00:07:00.770    18:27:47 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:00.770    18:27:47 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:00.770    18:27:47 event -- scripts/common.sh@368 -- # return 0
00:07:00.770    18:27:47 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:00.770    18:27:47 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:07:00.770  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:00.770  		--rc genhtml_branch_coverage=1
00:07:00.770  		--rc genhtml_function_coverage=1
00:07:00.770  		--rc genhtml_legend=1
00:07:00.770  		--rc geninfo_all_blocks=1
00:07:00.770  		--rc geninfo_unexecuted_blocks=1
00:07:00.770  		
00:07:00.770  		'
00:07:00.770    18:27:47 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:07:00.770  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:00.770  		--rc genhtml_branch_coverage=1
00:07:00.770  		--rc genhtml_function_coverage=1
00:07:00.770  		--rc genhtml_legend=1
00:07:00.770  		--rc geninfo_all_blocks=1
00:07:00.770  		--rc geninfo_unexecuted_blocks=1
00:07:00.770  		
00:07:00.770  		'
00:07:00.770    18:27:47 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:07:00.770  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:00.770  		--rc genhtml_branch_coverage=1
00:07:00.770  		--rc genhtml_function_coverage=1
00:07:00.770  		--rc genhtml_legend=1
00:07:00.770  		--rc geninfo_all_blocks=1
00:07:00.770  		--rc geninfo_unexecuted_blocks=1
00:07:00.770  		
00:07:00.770  		'
00:07:00.770    18:27:47 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:07:00.770  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:00.770  		--rc genhtml_branch_coverage=1
00:07:00.770  		--rc genhtml_function_coverage=1
00:07:00.770  		--rc genhtml_legend=1
00:07:00.770  		--rc geninfo_all_blocks=1
00:07:00.770  		--rc geninfo_unexecuted_blocks=1
00:07:00.770  		
00:07:00.770  		'
00:07:00.770   18:27:47 event -- event/event.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/bdev/nbd_common.sh
00:07:00.770    18:27:47 event -- bdev/nbd_common.sh@6 -- # set -e
00:07:00.770   18:27:47 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:07:00.770   18:27:47 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:07:00.770   18:27:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:00.770   18:27:47 event -- common/autotest_common.sh@10 -- # set +x
00:07:00.770  ************************************
00:07:00.770  START TEST event_perf
00:07:00.770  ************************************
00:07:00.770   18:27:47 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:07:00.770  Running I/O for 1 seconds...[2024-11-17 18:27:47.283948] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:07:00.770  [2024-11-17 18:27:47.284043] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid385903 ]
00:07:01.030  [2024-11-17 18:27:47.381491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:01.030  [2024-11-17 18:27:47.415039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:01.030  [2024-11-17 18:27:47.415104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:01.030  [2024-11-17 18:27:47.415096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:01.030  [2024-11-17 18:27:47.415156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:07:01.969  Running I/O for 1 seconds...
00:07:01.969  lcore  0:   200345
00:07:01.969  lcore  1:   200345
00:07:01.969  lcore  2:   200345
00:07:01.969  lcore  3:   200344
00:07:01.969  done.
00:07:01.969  
00:07:01.969  real	0m1.219s
00:07:01.969  user	0m4.094s
00:07:01.969  sys	0m0.118s
00:07:01.969   18:27:48 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:01.969   18:27:48 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:07:01.969  ************************************
00:07:01.969  END TEST event_perf
00:07:01.969  ************************************
00:07:01.969   18:27:48 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:07:01.969   18:27:48 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:07:01.969   18:27:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:01.969   18:27:48 event -- common/autotest_common.sh@10 -- # set +x
00:07:01.969  ************************************
00:07:01.969  START TEST event_reactor
00:07:01.969  ************************************
00:07:01.969   18:27:48 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:07:02.229  [2024-11-17 18:27:48.558150] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:07:02.229  [2024-11-17 18:27:48.558288] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid386135 ]
00:07:02.229  [2024-11-17 18:27:48.657457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:02.229  [2024-11-17 18:27:48.686783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:03.167  test_start
00:07:03.167  oneshot
00:07:03.167  tick 100
00:07:03.167  tick 100
00:07:03.167  tick 250
00:07:03.167  tick 100
00:07:03.167  tick 100
00:07:03.167  tick 100
00:07:03.167  tick 250
00:07:03.167  tick 500
00:07:03.167  tick 100
00:07:03.167  tick 100
00:07:03.167  tick 250
00:07:03.167  tick 100
00:07:03.167  tick 100
00:07:03.167  test_end
00:07:03.167  
00:07:03.167  real	0m1.216s
00:07:03.167  user	0m1.106s
00:07:03.167  sys	0m0.105s
00:07:03.167   18:27:49 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:03.167   18:27:49 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:07:03.167  ************************************
00:07:03.167  END TEST event_reactor
00:07:03.167  ************************************
00:07:03.427   18:27:49 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:07:03.427   18:27:49 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:07:03.427   18:27:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:03.427   18:27:49 event -- common/autotest_common.sh@10 -- # set +x
00:07:03.427  ************************************
00:07:03.427  START TEST event_reactor_perf
00:07:03.427  ************************************
00:07:03.427   18:27:49 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:07:03.427  [2024-11-17 18:27:49.823427] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:07:03.427  [2024-11-17 18:27:49.823534] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid386417 ]
00:07:03.427  [2024-11-17 18:27:49.922525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:03.427  [2024-11-17 18:27:49.952018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:04.807  test_start
00:07:04.807  test_end
00:07:04.807  Performance:   354323 events per second
00:07:04.807  
00:07:04.807  real	0m1.223s
00:07:04.807  user	0m1.110s
00:07:04.807  sys	0m0.106s
00:07:04.807   18:27:51 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:04.807   18:27:51 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:07:04.807  ************************************
00:07:04.807  END TEST event_reactor_perf
00:07:04.807  ************************************
00:07:04.807    18:27:51 event -- event/event.sh@49 -- # uname -s
00:07:04.807   18:27:51 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:07:04.807   18:27:51 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:07:04.807   18:27:51 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:04.807   18:27:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:04.807   18:27:51 event -- common/autotest_common.sh@10 -- # set +x
00:07:04.807  ************************************
00:07:04.807  START TEST event_scheduler
00:07:04.807  ************************************
00:07:04.807   18:27:51 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:07:04.807  * Looking for test storage...
00:07:04.807  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler
00:07:04.807    18:27:51 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:07:04.807     18:27:51 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version
00:07:04.807     18:27:51 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:07:04.807    18:27:51 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:07:04.807    18:27:51 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:04.807    18:27:51 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:04.807    18:27:51 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:04.807    18:27:51 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:07:04.807    18:27:51 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:07:04.807    18:27:51 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:07:04.807    18:27:51 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:07:04.807    18:27:51 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:07:04.807    18:27:51 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:07:04.807    18:27:51 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:07:04.807    18:27:51 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:04.807    18:27:51 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:07:04.807    18:27:51 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:07:04.807    18:27:51 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:04.807    18:27:51 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:04.807     18:27:51 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:07:04.807     18:27:51 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:07:04.807     18:27:51 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:04.807     18:27:51 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:07:04.807    18:27:51 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:07:04.807     18:27:51 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:07:04.807     18:27:51 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:07:04.807     18:27:51 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:04.807     18:27:51 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:07:04.807    18:27:51 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:07:04.807    18:27:51 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:04.807    18:27:51 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:04.807    18:27:51 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:07:04.807    18:27:51 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:04.807    18:27:51 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:07:04.807  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:04.807  		--rc genhtml_branch_coverage=1
00:07:04.807  		--rc genhtml_function_coverage=1
00:07:04.807  		--rc genhtml_legend=1
00:07:04.807  		--rc geninfo_all_blocks=1
00:07:04.807  		--rc geninfo_unexecuted_blocks=1
00:07:04.807  		
00:07:04.807  		'
00:07:04.807    18:27:51 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:07:04.807  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:04.807  		--rc genhtml_branch_coverage=1
00:07:04.807  		--rc genhtml_function_coverage=1
00:07:04.807  		--rc genhtml_legend=1
00:07:04.807  		--rc geninfo_all_blocks=1
00:07:04.807  		--rc geninfo_unexecuted_blocks=1
00:07:04.807  		
00:07:04.807  		'
00:07:04.807    18:27:51 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:07:04.807  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:04.807  		--rc genhtml_branch_coverage=1
00:07:04.807  		--rc genhtml_function_coverage=1
00:07:04.807  		--rc genhtml_legend=1
00:07:04.807  		--rc geninfo_all_blocks=1
00:07:04.807  		--rc geninfo_unexecuted_blocks=1
00:07:04.807  		
00:07:04.807  		'
00:07:04.807    18:27:51 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:07:04.807  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:04.807  		--rc genhtml_branch_coverage=1
00:07:04.807  		--rc genhtml_function_coverage=1
00:07:04.807  		--rc genhtml_legend=1
00:07:04.807  		--rc geninfo_all_blocks=1
00:07:04.807  		--rc geninfo_unexecuted_blocks=1
00:07:04.807  		
00:07:04.807  		'
00:07:04.807   18:27:51 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:07:04.807   18:27:51 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:07:04.807   18:27:51 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=386816
00:07:04.807   18:27:51 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:07:04.807   18:27:51 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 386816
00:07:04.807   18:27:51 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 386816 ']'
00:07:04.807   18:27:51 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:04.807   18:27:51 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:04.807   18:27:51 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:04.807  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:04.807   18:27:51 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:04.807   18:27:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:04.807  [2024-11-17 18:27:51.250789] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:07:04.807  [2024-11-17 18:27:51.250930] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid386816 ]
00:07:04.807  [2024-11-17 18:27:51.367477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:05.067  [2024-11-17 18:27:51.404815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:05.067  [2024-11-17 18:27:51.404949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:05.067  [2024-11-17 18:27:51.404955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:07:05.067  [2024-11-17 18:27:51.404954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:05.633   18:27:52 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:05.633   18:27:52 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:07:05.633   18:27:52 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:07:05.633   18:27:52 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.633   18:27:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:05.633  [2024-11-17 18:27:52.103776] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:07:05.633  [2024-11-17 18:27:52.103819] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:07:05.633  [2024-11-17 18:27:52.103852] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:07:05.633  [2024-11-17 18:27:52.103867] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:07:05.633  [2024-11-17 18:27:52.103878] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:07:05.633   18:27:52 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.633   18:27:52 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:07:05.633   18:27:52 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.633   18:27:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:05.633  [2024-11-17 18:27:52.197951] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:07:05.633   18:27:52 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.633   18:27:52 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:07:05.633   18:27:52 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:05.633   18:27:52 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:05.633   18:27:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:05.893  ************************************
00:07:05.893  START TEST scheduler_create_thread
00:07:05.893  ************************************
00:07:05.893   18:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:07:05.893   18:27:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:07:05.893   18:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.893   18:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:05.893  2
00:07:05.893   18:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.893   18:27:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:07:05.893   18:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.893   18:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:05.893  3
00:07:05.893   18:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.893   18:27:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:07:05.893   18:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.893   18:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:05.893  4
00:07:05.893   18:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.893   18:27:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:07:05.893   18:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.893   18:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:05.893  5
00:07:05.893   18:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.893   18:27:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:07:05.893   18:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.893   18:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:05.893  6
00:07:05.893   18:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.893   18:27:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:07:05.893   18:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.893   18:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:05.893  7
00:07:05.893   18:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.893   18:27:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:07:05.893   18:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.893   18:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:05.893  8
00:07:05.893   18:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.893   18:27:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:07:05.893   18:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.893   18:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:05.893  9
00:07:05.893   18:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.893   18:27:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:07:05.893   18:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.893   18:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:05.893  10
00:07:05.893   18:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.893    18:27:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:07:05.893    18:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.893    18:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:05.893    18:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.893   18:27:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:07:05.893   18:27:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:07:05.893   18:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.893   18:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:05.893   18:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.893    18:27:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:07:05.893    18:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.894    18:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:06.461    18:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:06.461   18:27:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:07:06.461   18:27:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:07:06.461   18:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:06.461   18:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:07.841   18:27:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:07.841  
00:07:07.841  real	0m1.756s
00:07:07.841  user	0m0.024s
00:07:07.841  sys	0m0.002s
00:07:07.841   18:27:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:07.841   18:27:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:07.841  ************************************
00:07:07.841  END TEST scheduler_create_thread
00:07:07.841  ************************************
00:07:07.842   18:27:53 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:07:07.842   18:27:53 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 386816
00:07:07.842   18:27:54 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 386816 ']'
00:07:07.842   18:27:54 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 386816
00:07:07.842    18:27:54 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:07:07.842   18:27:54 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:07.842    18:27:54 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 386816
00:07:07.842   18:27:54 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:07:07.842   18:27:54 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:07:07.842   18:27:54 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 386816'
00:07:07.842  killing process with pid 386816
00:07:07.842   18:27:54 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 386816
00:07:07.842   18:27:54 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 386816
00:07:08.101  [2024-11-17 18:27:54.461416] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:07:08.361  
00:07:08.361  real	0m3.617s
00:07:08.361  user	0m6.489s
00:07:08.361  sys	0m0.422s
00:07:08.361   18:27:54 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:08.361   18:27:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:08.361  ************************************
00:07:08.361  END TEST event_scheduler
00:07:08.361  ************************************
00:07:08.361   18:27:54 event -- event/event.sh@51 -- # modprobe -n nbd
00:07:08.361   18:27:54 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:07:08.361   18:27:54 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:08.361   18:27:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:08.361   18:27:54 event -- common/autotest_common.sh@10 -- # set +x
00:07:08.361  ************************************
00:07:08.361  START TEST app_repeat
00:07:08.361  ************************************
00:07:08.361   18:27:54 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:07:08.361   18:27:54 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:08.361   18:27:54 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:08.361   18:27:54 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:07:08.361   18:27:54 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:08.361   18:27:54 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:07:08.361   18:27:54 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:07:08.361   18:27:54 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:07:08.361   18:27:54 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:07:08.361   18:27:54 event.app_repeat -- event/event.sh@19 -- # repeat_pid=387482
00:07:08.361   18:27:54 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:07:08.361   18:27:54 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 387482'
00:07:08.361  Process app_repeat pid: 387482
00:07:08.361   18:27:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:07:08.361   18:27:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:07:08.361  spdk_app_start Round 0
00:07:08.361   18:27:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 387482 /var/tmp/spdk-nbd.sock
00:07:08.361   18:27:54 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 387482 ']'
00:07:08.361   18:27:54 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:08.361   18:27:54 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:08.361   18:27:54 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:08.361  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:07:08.361   18:27:54 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:08.361   18:27:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:08.361  [2024-11-17 18:27:54.769755] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:07:08.361  [2024-11-17 18:27:54.769866] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid387482 ]
00:07:08.361  [2024-11-17 18:27:54.872857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:08.362  [2024-11-17 18:27:54.903302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:08.362  [2024-11-17 18:27:54.903345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:08.621   18:27:55 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:08.621   18:27:55 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:07:08.621   18:27:55 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:08.881  Malloc0
00:07:08.881   18:27:55 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:08.881  Malloc1
00:07:08.881   18:27:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:08.881   18:27:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:08.881   18:27:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:08.881   18:27:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:07:08.881   18:27:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:08.881   18:27:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:07:08.881   18:27:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:08.881   18:27:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:08.881   18:27:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:08.881   18:27:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:08.881   18:27:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:08.881   18:27:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:08.881   18:27:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:07:08.881   18:27:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:08.881   18:27:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:08.881   18:27:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:07:09.143  /dev/nbd0
00:07:09.143    18:27:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:09.143   18:27:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:09.143   18:27:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:07:09.143   18:27:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:07:09.143   18:27:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:09.143   18:27:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:09.143   18:27:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:07:09.143   18:27:55 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:07:09.143   18:27:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:09.143   18:27:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:09.143   18:27:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:09.143  1+0 records in
00:07:09.143  1+0 records out
00:07:09.143  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201936 s, 20.3 MB/s
00:07:09.143    18:27:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:07:09.143   18:27:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:07:09.143   18:27:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:07:09.143   18:27:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:09.143   18:27:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:07:09.143   18:27:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:09.143   18:27:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:09.143   18:27:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:07:09.404  /dev/nbd1
00:07:09.404    18:27:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:07:09.404   18:27:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:07:09.404   18:27:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:07:09.404   18:27:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:07:09.404   18:27:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:09.404   18:27:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:09.404   18:27:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:07:09.404   18:27:55 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:07:09.404   18:27:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:09.404   18:27:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:09.404   18:27:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:09.404  1+0 records in
00:07:09.404  1+0 records out
00:07:09.404  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000160951 s, 25.4 MB/s
00:07:09.404    18:27:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:07:09.404   18:27:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:07:09.404   18:27:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:07:09.404   18:27:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:09.404   18:27:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:07:09.404   18:27:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:09.404   18:27:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:09.404    18:27:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:09.404    18:27:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:09.404     18:27:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:09.706    18:27:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:09.706    {
00:07:09.706      "nbd_device": "/dev/nbd0",
00:07:09.706      "bdev_name": "Malloc0"
00:07:09.706    },
00:07:09.706    {
00:07:09.706      "nbd_device": "/dev/nbd1",
00:07:09.706      "bdev_name": "Malloc1"
00:07:09.706    }
00:07:09.706  ]'
00:07:09.706     18:27:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:09.706     18:27:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:07:09.706    {
00:07:09.706      "nbd_device": "/dev/nbd0",
00:07:09.706      "bdev_name": "Malloc0"
00:07:09.706    },
00:07:09.706    {
00:07:09.706      "nbd_device": "/dev/nbd1",
00:07:09.706      "bdev_name": "Malloc1"
00:07:09.706    }
00:07:09.706  ]'
00:07:09.706    18:27:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:07:09.706  /dev/nbd1'
00:07:09.706     18:27:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:07:09.706  /dev/nbd1'
00:07:09.706     18:27:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:09.706    18:27:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:07:09.706    18:27:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:07:09.706   18:27:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:07:09.706   18:27:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:07:09.706   18:27:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:07:09.706   18:27:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:09.706   18:27:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:09.706   18:27:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:07:09.706   18:27:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
00:07:09.706   18:27:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:07:09.706   18:27:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:07:09.706  256+0 records in
00:07:09.706  256+0 records out
00:07:09.706  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00363123 s, 289 MB/s
00:07:09.706   18:27:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:09.706   18:27:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:07:09.706  256+0 records in
00:07:09.706  256+0 records out
00:07:09.706  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.018962 s, 55.3 MB/s
00:07:09.706   18:27:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:09.706   18:27:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:07:09.706  256+0 records in
00:07:09.706  256+0 records out
00:07:09.706  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021651 s, 48.4 MB/s
00:07:09.706   18:27:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:07:09.706   18:27:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:09.706   18:27:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:09.706   18:27:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:07:09.706   18:27:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
00:07:09.706   18:27:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:07:09.706   18:27:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:07:09.706   18:27:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:09.706   18:27:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:07:09.706   18:27:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:09.706   18:27:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:07:09.967   18:27:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
00:07:09.967   18:27:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:07:09.967   18:27:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:09.967   18:27:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:09.967   18:27:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:09.967   18:27:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:07:09.967   18:27:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:09.967   18:27:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:07:09.967    18:27:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:09.967   18:27:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:09.967   18:27:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:09.967   18:27:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:09.967   18:27:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:09.967   18:27:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:09.967   18:27:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:09.967   18:27:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:09.967   18:27:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:09.967   18:27:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:07:10.227    18:27:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:07:10.227   18:27:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:07:10.227   18:27:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:07:10.227   18:27:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:10.227   18:27:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:10.227   18:27:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:07:10.227   18:27:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:10.227   18:27:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:10.227    18:27:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:10.227    18:27:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:10.227     18:27:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:10.486    18:27:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:10.486     18:27:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:10.486     18:27:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:10.486    18:27:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:10.486     18:27:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:07:10.486     18:27:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:10.486     18:27:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:07:10.486    18:27:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:07:10.486    18:27:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:07:10.486   18:27:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:07:10.486   18:27:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:07:10.486   18:27:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:07:10.486   18:27:57 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:07:10.746   18:27:57 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:07:11.006  [2024-11-17 18:27:57.468592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:11.006  [2024-11-17 18:27:57.495546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:11.006  [2024-11-17 18:27:57.495549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:11.006  [2024-11-17 18:27:57.555658] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:07:11.006  [2024-11-17 18:27:57.555744] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:07:14.295   18:28:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:07:14.295   18:28:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:07:14.295  spdk_app_start Round 1
00:07:14.295   18:28:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 387482 /var/tmp/spdk-nbd.sock
00:07:14.295   18:28:00 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 387482 ']'
00:07:14.295   18:28:00 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:14.295   18:28:00 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:14.295   18:28:00 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:14.295  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:07:14.295   18:28:00 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:14.295   18:28:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:14.295   18:28:00 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:14.295   18:28:00 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:07:14.295   18:28:00 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:14.295  Malloc0
00:07:14.295   18:28:00 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:14.555  Malloc1
00:07:14.555   18:28:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:14.555   18:28:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:14.555   18:28:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:14.555   18:28:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:07:14.555   18:28:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:14.555   18:28:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:07:14.555   18:28:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:14.555   18:28:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:14.555   18:28:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:14.555   18:28:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:14.555   18:28:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:14.555   18:28:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:14.555   18:28:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:07:14.555   18:28:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:14.555   18:28:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:14.555   18:28:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:07:14.813  /dev/nbd0
00:07:14.813    18:28:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:14.813   18:28:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:14.814   18:28:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:07:14.814   18:28:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:07:14.814   18:28:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:14.814   18:28:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:14.814   18:28:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:07:14.814   18:28:01 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:07:14.814   18:28:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:14.814   18:28:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:14.814   18:28:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:14.814  1+0 records in
00:07:14.814  1+0 records out
00:07:14.814  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00012343 s, 33.2 MB/s
00:07:14.814    18:28:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:07:14.814   18:28:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:07:14.814   18:28:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:07:14.814   18:28:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:14.814   18:28:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:07:14.814   18:28:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:14.814   18:28:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:14.814   18:28:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:07:15.072  /dev/nbd1
00:07:15.072    18:28:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:07:15.072   18:28:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:07:15.072   18:28:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:07:15.072   18:28:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:07:15.072   18:28:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:15.072   18:28:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:15.072   18:28:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:07:15.072   18:28:01 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:07:15.072   18:28:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:15.072   18:28:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:15.072   18:28:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:15.072  1+0 records in
00:07:15.072  1+0 records out
00:07:15.072  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267116 s, 15.3 MB/s
00:07:15.072    18:28:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:07:15.072   18:28:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:07:15.072   18:28:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:07:15.073   18:28:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:15.073   18:28:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:07:15.073   18:28:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:15.073   18:28:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:15.073    18:28:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:15.073    18:28:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:15.073     18:28:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:15.331    18:28:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:15.331    {
00:07:15.331      "nbd_device": "/dev/nbd0",
00:07:15.331      "bdev_name": "Malloc0"
00:07:15.331    },
00:07:15.331    {
00:07:15.331      "nbd_device": "/dev/nbd1",
00:07:15.331      "bdev_name": "Malloc1"
00:07:15.331    }
00:07:15.331  ]'
00:07:15.331     18:28:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:07:15.331    {
00:07:15.331      "nbd_device": "/dev/nbd0",
00:07:15.331      "bdev_name": "Malloc0"
00:07:15.331    },
00:07:15.331    {
00:07:15.331      "nbd_device": "/dev/nbd1",
00:07:15.331      "bdev_name": "Malloc1"
00:07:15.331    }
00:07:15.331  ]'
00:07:15.331     18:28:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:15.331    18:28:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:07:15.331  /dev/nbd1'
00:07:15.331     18:28:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:07:15.331  /dev/nbd1'
00:07:15.332     18:28:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:15.332    18:28:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:07:15.332    18:28:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:07:15.332   18:28:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:07:15.332   18:28:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:07:15.332   18:28:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:07:15.332   18:28:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:15.332   18:28:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:15.332   18:28:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:07:15.332   18:28:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
00:07:15.332   18:28:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:07:15.332   18:28:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:07:15.332  256+0 records in
00:07:15.332  256+0 records out
00:07:15.332  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00368562 s, 285 MB/s
00:07:15.332   18:28:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:15.332   18:28:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:07:15.332  256+0 records in
00:07:15.332  256+0 records out
00:07:15.332  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0187781 s, 55.8 MB/s
00:07:15.332   18:28:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:15.332   18:28:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:07:15.332  256+0 records in
00:07:15.332  256+0 records out
00:07:15.332  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0211858 s, 49.5 MB/s
00:07:15.332   18:28:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:07:15.332   18:28:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:15.332   18:28:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:15.332   18:28:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:07:15.332   18:28:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
00:07:15.332   18:28:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:07:15.332   18:28:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:07:15.332   18:28:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:15.332   18:28:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:07:15.332   18:28:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:15.332   18:28:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:07:15.332   18:28:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
00:07:15.332   18:28:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:07:15.332   18:28:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:15.332   18:28:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:15.332   18:28:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:15.332   18:28:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:07:15.332   18:28:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:15.332   18:28:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:07:15.590    18:28:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:15.591   18:28:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:15.591   18:28:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:15.591   18:28:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:15.591   18:28:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:15.591   18:28:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:15.591   18:28:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:15.591   18:28:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:15.591   18:28:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:15.591   18:28:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:07:15.849    18:28:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:07:15.849   18:28:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:07:15.849   18:28:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:07:15.849   18:28:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:15.849   18:28:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:15.850   18:28:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:07:15.850   18:28:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:15.850   18:28:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:15.850    18:28:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:15.850    18:28:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:15.850     18:28:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:16.109    18:28:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:16.109     18:28:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:16.109     18:28:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:16.109    18:28:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:16.109     18:28:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:07:16.109     18:28:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:16.109     18:28:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:07:16.109    18:28:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:07:16.109    18:28:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:07:16.109   18:28:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:07:16.109   18:28:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:07:16.109   18:28:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:07:16.109   18:28:02 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:07:16.368   18:28:02 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:07:16.627  [2024-11-17 18:28:03.002423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:16.627  [2024-11-17 18:28:03.027985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:16.627  [2024-11-17 18:28:03.027997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:16.627  [2024-11-17 18:28:03.084946] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:07:16.627  [2024-11-17 18:28:03.085014] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:07:19.916   18:28:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:07:19.916   18:28:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:07:19.916  spdk_app_start Round 2
00:07:19.916   18:28:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 387482 /var/tmp/spdk-nbd.sock
00:07:19.916   18:28:05 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 387482 ']'
00:07:19.916   18:28:05 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:19.916   18:28:05 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:19.916   18:28:05 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:19.916  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:07:19.916   18:28:05 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:19.916   18:28:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:19.916   18:28:06 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:19.916   18:28:06 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:07:19.916   18:28:06 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:19.916  Malloc0
00:07:19.916   18:28:06 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:19.916  Malloc1
00:07:20.177   18:28:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:20.177   18:28:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:20.177   18:28:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:20.177   18:28:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:07:20.177   18:28:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:20.177   18:28:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:07:20.177   18:28:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:20.177   18:28:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:20.177   18:28:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:20.177   18:28:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:20.177   18:28:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:20.177   18:28:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:20.177   18:28:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:07:20.177   18:28:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:20.177   18:28:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:20.177   18:28:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:07:20.177  /dev/nbd0
00:07:20.177    18:28:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:20.177   18:28:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:20.177   18:28:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:07:20.177   18:28:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:07:20.177   18:28:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:20.177   18:28:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:20.177   18:28:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:07:20.437   18:28:06 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:07:20.437   18:28:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:20.437   18:28:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:20.437   18:28:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:20.437  1+0 records in
00:07:20.437  1+0 records out
00:07:20.437  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196132 s, 20.9 MB/s
00:07:20.437    18:28:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:07:20.437   18:28:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:07:20.437   18:28:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:07:20.437   18:28:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:20.437   18:28:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:07:20.437   18:28:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:20.437   18:28:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:20.437   18:28:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:07:20.696  /dev/nbd1
00:07:20.696    18:28:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:07:20.696   18:28:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:07:20.696   18:28:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:07:20.696   18:28:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:07:20.696   18:28:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:20.696   18:28:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:20.696   18:28:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:07:20.696   18:28:07 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:07:20.696   18:28:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:20.696   18:28:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:20.696   18:28:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:20.696  1+0 records in
00:07:20.696  1+0 records out
00:07:20.696  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218247 s, 18.8 MB/s
00:07:20.697    18:28:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:07:20.697   18:28:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:07:20.697   18:28:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:07:20.697   18:28:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:20.697   18:28:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:07:20.697   18:28:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:20.697   18:28:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:20.697    18:28:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:20.697    18:28:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:20.697     18:28:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:20.956    18:28:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:20.956    {
00:07:20.956      "nbd_device": "/dev/nbd0",
00:07:20.956      "bdev_name": "Malloc0"
00:07:20.956    },
00:07:20.956    {
00:07:20.956      "nbd_device": "/dev/nbd1",
00:07:20.956      "bdev_name": "Malloc1"
00:07:20.956    }
00:07:20.956  ]'
00:07:20.956     18:28:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:07:20.956    {
00:07:20.956      "nbd_device": "/dev/nbd0",
00:07:20.956      "bdev_name": "Malloc0"
00:07:20.956    },
00:07:20.956    {
00:07:20.956      "nbd_device": "/dev/nbd1",
00:07:20.956      "bdev_name": "Malloc1"
00:07:20.956    }
00:07:20.956  ]'
00:07:20.956     18:28:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:20.956    18:28:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:07:20.956  /dev/nbd1'
00:07:20.956     18:28:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:07:20.956  /dev/nbd1'
00:07:20.956     18:28:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:20.956    18:28:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:07:20.956    18:28:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:07:20.956   18:28:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:07:20.956   18:28:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:07:20.956   18:28:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:07:20.956   18:28:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:20.956   18:28:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:20.956   18:28:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:07:20.956   18:28:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
00:07:20.956   18:28:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:07:20.956   18:28:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:07:20.956  256+0 records in
00:07:20.956  256+0 records out
00:07:20.956  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00369809 s, 284 MB/s
00:07:20.956   18:28:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:20.956   18:28:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:07:20.956  256+0 records in
00:07:20.956  256+0 records out
00:07:20.956  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0192496 s, 54.5 MB/s
00:07:20.956   18:28:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:20.956   18:28:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:07:20.956  256+0 records in
00:07:20.956  256+0 records out
00:07:20.956  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202205 s, 51.9 MB/s
00:07:20.956   18:28:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:07:20.956   18:28:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:20.956   18:28:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:20.956   18:28:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:07:20.956   18:28:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
00:07:20.956   18:28:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:07:20.957   18:28:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:07:20.957   18:28:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:20.957   18:28:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:07:20.957   18:28:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:20.957   18:28:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:07:20.957   18:28:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
00:07:20.957   18:28:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:07:20.957   18:28:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:20.957   18:28:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:20.957   18:28:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:20.957   18:28:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:07:20.957   18:28:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:20.957   18:28:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:07:21.216    18:28:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:21.216   18:28:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:21.216   18:28:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:21.216   18:28:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:21.216   18:28:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:21.216   18:28:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:21.216   18:28:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:21.216   18:28:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:21.216   18:28:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:21.216   18:28:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:07:21.477    18:28:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:07:21.477   18:28:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:07:21.477   18:28:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:07:21.477   18:28:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:21.477   18:28:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:21.477   18:28:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:07:21.477   18:28:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:21.477   18:28:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:21.477    18:28:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:21.477    18:28:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:21.477     18:28:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:21.736    18:28:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:21.736     18:28:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:21.736     18:28:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:21.736    18:28:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:21.736     18:28:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:21.736     18:28:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:07:21.736     18:28:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:07:21.737    18:28:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:07:21.737    18:28:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:07:21.737   18:28:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:07:21.737   18:28:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:07:21.737   18:28:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:07:21.737   18:28:08 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:07:21.996   18:28:08 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:07:21.996  [2024-11-17 18:28:08.571122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:22.256  [2024-11-17 18:28:08.597782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:22.256  [2024-11-17 18:28:08.597783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:22.256  [2024-11-17 18:28:08.653697] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:07:22.256  [2024-11-17 18:28:08.653763] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:07:25.546   18:28:11 event.app_repeat -- event/event.sh@38 -- # waitforlisten 387482 /var/tmp/spdk-nbd.sock
00:07:25.546   18:28:11 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 387482 ']'
00:07:25.546   18:28:11 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:25.546   18:28:11 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:25.546   18:28:11 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:25.546  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:07:25.546   18:28:11 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:25.546   18:28:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:25.546   18:28:11 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:25.546   18:28:11 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:07:25.546   18:28:11 event.app_repeat -- event/event.sh@39 -- # killprocess 387482
00:07:25.546   18:28:11 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 387482 ']'
00:07:25.546   18:28:11 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 387482
00:07:25.546    18:28:11 event.app_repeat -- common/autotest_common.sh@959 -- # uname
00:07:25.546   18:28:11 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:25.546    18:28:11 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 387482
00:07:25.546   18:28:11 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:25.546   18:28:11 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:25.546   18:28:11 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 387482'
00:07:25.546  killing process with pid 387482
00:07:25.546   18:28:11 event.app_repeat -- common/autotest_common.sh@973 -- # kill 387482
00:07:25.546   18:28:11 event.app_repeat -- common/autotest_common.sh@978 -- # wait 387482
00:07:25.546  spdk_app_start is called in Round 0.
00:07:25.546  Shutdown signal received, stop current app iteration
00:07:25.546  Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 reinitialization...
00:07:25.546  spdk_app_start is called in Round 1.
00:07:25.546  Shutdown signal received, stop current app iteration
00:07:25.546  Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 reinitialization...
00:07:25.546  spdk_app_start is called in Round 2.
00:07:25.546  Shutdown signal received, stop current app iteration
00:07:25.546  Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 reinitialization...
00:07:25.546  spdk_app_start is called in Round 3.
00:07:25.546  Shutdown signal received, stop current app iteration
00:07:25.546   18:28:11 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:07:25.546   18:28:11 event.app_repeat -- event/event.sh@42 -- # return 0
00:07:25.546  
00:07:25.546  real	0m17.109s
00:07:25.546  user	0m37.332s
00:07:25.546  sys	0m2.596s
00:07:25.546   18:28:11 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:25.546   18:28:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:25.546  ************************************
00:07:25.546  END TEST app_repeat
00:07:25.546  ************************************
00:07:25.546   18:28:11 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:07:25.546   18:28:11 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/cpu_locks.sh
00:07:25.546   18:28:11 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:25.546   18:28:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:25.546   18:28:11 event -- common/autotest_common.sh@10 -- # set +x
00:07:25.546  ************************************
00:07:25.546  START TEST cpu_locks
00:07:25.546  ************************************
00:07:25.546   18:28:11 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/cpu_locks.sh
00:07:25.546  * Looking for test storage...
00:07:25.546  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event
00:07:25.546    18:28:11 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:07:25.546     18:28:11 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version
00:07:25.546     18:28:11 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:07:25.546    18:28:11 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:07:25.546    18:28:11 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:25.546    18:28:11 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:25.546    18:28:11 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:25.546    18:28:11 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:07:25.546    18:28:11 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:07:25.546    18:28:11 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:07:25.546    18:28:11 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:07:25.546    18:28:11 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:07:25.546    18:28:11 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:07:25.546    18:28:11 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:07:25.546    18:28:11 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:25.546    18:28:11 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:07:25.546    18:28:11 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:07:25.546    18:28:11 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:25.546    18:28:11 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:25.546     18:28:11 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:07:25.546     18:28:11 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:07:25.546     18:28:11 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:25.546     18:28:11 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:07:25.546    18:28:11 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:07:25.546     18:28:12 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:07:25.546     18:28:12 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:07:25.546     18:28:12 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:25.546     18:28:12 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:07:25.546    18:28:12 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:07:25.547    18:28:12 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:25.547    18:28:12 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:25.547    18:28:12 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:07:25.547    18:28:12 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:25.547    18:28:12 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:07:25.547  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:25.547  		--rc genhtml_branch_coverage=1
00:07:25.547  		--rc genhtml_function_coverage=1
00:07:25.547  		--rc genhtml_legend=1
00:07:25.547  		--rc geninfo_all_blocks=1
00:07:25.547  		--rc geninfo_unexecuted_blocks=1
00:07:25.547  		
00:07:25.547  		'
00:07:25.547    18:28:12 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:07:25.547  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:25.547  		--rc genhtml_branch_coverage=1
00:07:25.547  		--rc genhtml_function_coverage=1
00:07:25.547  		--rc genhtml_legend=1
00:07:25.547  		--rc geninfo_all_blocks=1
00:07:25.547  		--rc geninfo_unexecuted_blocks=1
00:07:25.547  		
00:07:25.547  		'
00:07:25.547    18:28:12 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:07:25.547  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:25.547  		--rc genhtml_branch_coverage=1
00:07:25.547  		--rc genhtml_function_coverage=1
00:07:25.547  		--rc genhtml_legend=1
00:07:25.547  		--rc geninfo_all_blocks=1
00:07:25.547  		--rc geninfo_unexecuted_blocks=1
00:07:25.547  		
00:07:25.547  		'
00:07:25.547    18:28:12 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:07:25.547  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:25.547  		--rc genhtml_branch_coverage=1
00:07:25.547  		--rc genhtml_function_coverage=1
00:07:25.547  		--rc genhtml_legend=1
00:07:25.547  		--rc geninfo_all_blocks=1
00:07:25.547  		--rc geninfo_unexecuted_blocks=1
00:07:25.547  		
00:07:25.547  		'
00:07:25.547   18:28:12 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:07:25.547   18:28:12 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:07:25.547   18:28:12 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:07:25.547   18:28:12 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:07:25.547   18:28:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:25.547   18:28:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:25.547   18:28:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:25.547  ************************************
00:07:25.547  START TEST default_locks
00:07:25.547  ************************************
00:07:25.547   18:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks
00:07:25.547   18:28:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=390614
00:07:25.547   18:28:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:07:25.547   18:28:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 390614
00:07:25.547   18:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 390614 ']'
00:07:25.547   18:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:25.547   18:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:25.547   18:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:25.547  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:25.547   18:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:25.547   18:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:07:25.547  [2024-11-17 18:28:12.118201] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:07:25.547  [2024-11-17 18:28:12.118345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid390614 ]
00:07:25.806  [2024-11-17 18:28:12.228602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:25.806  [2024-11-17 18:28:12.259579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:26.745   18:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:26.745   18:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0
00:07:26.745   18:28:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 390614
00:07:26.745   18:28:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 390614
00:07:26.745   18:28:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:26.745  lslocks: write error
00:07:26.745   18:28:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 390614
00:07:26.745   18:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 390614 ']'
00:07:26.745   18:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 390614
00:07:26.745    18:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname
00:07:26.746   18:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:26.746    18:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 390614
00:07:26.746   18:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:26.746   18:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:26.746   18:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 390614'
00:07:26.746  killing process with pid 390614
00:07:26.746   18:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 390614
00:07:26.746   18:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 390614
00:07:27.315   18:28:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 390614
00:07:27.315   18:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0
00:07:27.315   18:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 390614
00:07:27.315   18:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:07:27.315   18:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:27.315    18:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:07:27.315   18:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:27.315   18:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 390614
00:07:27.315   18:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 390614 ']'
00:07:27.315   18:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:27.315   18:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:27.315   18:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:27.315  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:27.315   18:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:27.315   18:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:07:27.315  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (390614) - No such process
00:07:27.315  ERROR: process (pid: 390614) is no longer running
00:07:27.315   18:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:27.315   18:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
00:07:27.315   18:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
00:07:27.315   18:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:27.315   18:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:27.315   18:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:27.315   18:28:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:07:27.315   18:28:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:07:27.315   18:28:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:07:27.315   18:28:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:07:27.315  
00:07:27.315  real	0m1.610s
00:07:27.315  user	0m1.635s
00:07:27.315  sys	0m0.553s
00:07:27.315   18:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:27.315   18:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:07:27.315  ************************************
00:07:27.315  END TEST default_locks
00:07:27.315  ************************************
00:07:27.315   18:28:13 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:07:27.315   18:28:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:27.315   18:28:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:27.315   18:28:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:27.315  ************************************
00:07:27.315  START TEST default_locks_via_rpc
00:07:27.315  ************************************
00:07:27.315   18:28:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
00:07:27.315   18:28:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:07:27.315   18:28:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=391046
00:07:27.315   18:28:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 391046
00:07:27.315   18:28:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 391046 ']'
00:07:27.315   18:28:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:27.315   18:28:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:27.315   18:28:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:27.315  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:27.315   18:28:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:27.315   18:28:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:27.315  [2024-11-17 18:28:13.779970] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:07:27.315  [2024-11-17 18:28:13.780114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid391046 ]
00:07:27.575  [2024-11-17 18:28:13.893404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:27.575  [2024-11-17 18:28:13.929779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:28.143   18:28:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:28.143   18:28:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:07:28.143   18:28:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:07:28.143   18:28:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:28.143   18:28:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:28.143   18:28:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:28.143   18:28:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:07:28.143   18:28:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:07:28.143   18:28:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:07:28.143   18:28:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:07:28.143   18:28:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:07:28.143   18:28:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:28.143   18:28:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:28.143   18:28:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:28.143   18:28:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 391046
00:07:28.143   18:28:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 391046
00:07:28.143   18:28:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:28.403   18:28:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 391046
00:07:28.403   18:28:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 391046 ']'
00:07:28.403   18:28:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 391046
00:07:28.403    18:28:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:07:28.403   18:28:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:28.403    18:28:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 391046
00:07:28.403   18:28:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:28.403   18:28:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:28.403   18:28:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 391046'
00:07:28.403  killing process with pid 391046
00:07:28.403   18:28:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 391046
00:07:28.403   18:28:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 391046
00:07:28.972  
00:07:28.972  real	0m1.666s
00:07:28.972  user	0m1.695s
00:07:28.972  sys	0m0.581s
00:07:28.972   18:28:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:28.972   18:28:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:28.972  ************************************
00:07:28.972  END TEST default_locks_via_rpc
00:07:28.972  ************************************
00:07:28.972   18:28:15 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:07:28.972   18:28:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:28.972   18:28:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:28.972   18:28:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:28.972  ************************************
00:07:28.972  START TEST non_locking_app_on_locked_coremask
00:07:28.972  ************************************
00:07:28.972   18:28:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:07:28.972   18:28:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:07:28.972   18:28:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=391291
00:07:28.972   18:28:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 391291 /var/tmp/spdk.sock
00:07:28.972   18:28:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 391291 ']'
00:07:28.972   18:28:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:28.972   18:28:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:28.972   18:28:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:28.972  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:28.972   18:28:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:28.972   18:28:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:28.972  [2024-11-17 18:28:15.501782] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:07:28.972  [2024-11-17 18:28:15.501949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid391291 ]
00:07:29.231  [2024-11-17 18:28:15.612516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:29.231  [2024-11-17 18:28:15.648338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:29.799   18:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:29.799   18:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:07:29.799   18:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:07:29.799   18:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=391498
00:07:29.799   18:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 391498 /var/tmp/spdk2.sock
00:07:29.799   18:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 391498 ']'
00:07:29.799   18:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:29.799   18:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:29.799   18:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:29.799  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:29.799   18:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:29.799   18:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:30.059  [2024-11-17 18:28:16.421395] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:07:30.059  [2024-11-17 18:28:16.421529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid391498 ]
00:07:30.059  [2024-11-17 18:28:16.567031] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:30.059  [2024-11-17 18:28:16.567070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:30.318  [2024-11-17 18:28:16.636953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:30.886   18:28:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:30.886   18:28:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:07:30.886   18:28:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 391291
00:07:30.886   18:28:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 391291
00:07:30.886   18:28:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:31.146  lslocks: write error
00:07:31.146   18:28:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 391291
00:07:31.146   18:28:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 391291 ']'
00:07:31.146   18:28:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 391291
00:07:31.146    18:28:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:07:31.146   18:28:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:31.146    18:28:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 391291
00:07:31.146   18:28:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:31.146   18:28:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:31.146   18:28:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 391291'
00:07:31.146  killing process with pid 391291
00:07:31.146   18:28:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 391291
00:07:31.146   18:28:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 391291
00:07:32.086   18:28:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 391498
00:07:32.086   18:28:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 391498 ']'
00:07:32.086   18:28:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 391498
00:07:32.086    18:28:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:07:32.086   18:28:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:32.086    18:28:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 391498
00:07:32.086   18:28:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:32.086   18:28:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:32.086   18:28:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 391498'
00:07:32.086  killing process with pid 391498
00:07:32.086   18:28:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 391498
00:07:32.086   18:28:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 391498
00:07:32.656  
00:07:32.656  real	0m3.628s
00:07:32.656  user	0m3.796s
00:07:32.656  sys	0m1.142s
00:07:32.656   18:28:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:32.656   18:28:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:32.656  ************************************
00:07:32.656  END TEST non_locking_app_on_locked_coremask
00:07:32.656  ************************************
00:07:32.656   18:28:19 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:07:32.656   18:28:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:32.656   18:28:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:32.656   18:28:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:32.656  ************************************
00:07:32.656  START TEST locking_app_on_unlocked_coremask
00:07:32.656  ************************************
00:07:32.656   18:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:07:32.656   18:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:07:32.656   18:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=391951
00:07:32.656   18:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 391951 /var/tmp/spdk.sock
00:07:32.656   18:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 391951 ']'
00:07:32.656   18:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:32.656   18:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:32.656   18:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:32.656  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:32.656   18:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:32.656   18:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:32.656  [2024-11-17 18:28:19.172972] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:07:32.656  [2024-11-17 18:28:19.173110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid391951 ]
00:07:32.915  [2024-11-17 18:28:19.290362] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:32.915  [2024-11-17 18:28:19.290405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:32.915  [2024-11-17 18:28:19.326379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:33.482   18:28:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:33.482   18:28:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:07:33.482   18:28:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:07:33.482   18:28:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=392161
00:07:33.482   18:28:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 392161 /var/tmp/spdk2.sock
00:07:33.482   18:28:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 392161 ']'
00:07:33.482   18:28:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:33.482   18:28:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:33.482   18:28:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:33.482  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:33.482   18:28:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:33.482   18:28:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:33.741  [2024-11-17 18:28:20.099495] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:07:33.741  [2024-11-17 18:28:20.099604] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid392161 ]
00:07:33.741  [2024-11-17 18:28:20.246964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:33.741  [2024-11-17 18:28:20.316847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:34.679   18:28:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:34.679   18:28:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:07:34.679   18:28:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 392161
00:07:34.679   18:28:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 392161
00:07:34.679   18:28:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:34.938  lslocks: write error
00:07:34.938   18:28:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 391951
00:07:34.938   18:28:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 391951 ']'
00:07:34.938   18:28:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 391951
00:07:34.938    18:28:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:07:34.938   18:28:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:34.938    18:28:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 391951
00:07:34.938   18:28:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:34.938   18:28:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:34.938   18:28:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 391951'
00:07:34.938  killing process with pid 391951
00:07:34.938   18:28:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 391951
00:07:34.938   18:28:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 391951
00:07:35.876   18:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 392161
00:07:35.877   18:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 392161 ']'
00:07:35.877   18:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 392161
00:07:35.877    18:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:07:35.877   18:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:35.877    18:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 392161
00:07:35.877   18:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:35.877   18:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:35.877   18:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 392161'
00:07:35.877  killing process with pid 392161
00:07:35.877   18:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 392161
00:07:35.877   18:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 392161
00:07:36.136  
00:07:36.136  real	0m3.595s
00:07:36.136  user	0m3.745s
00:07:36.136  sys	0m1.182s
00:07:36.136   18:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:36.136   18:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:36.136  ************************************
00:07:36.136  END TEST locking_app_on_unlocked_coremask
00:07:36.136  ************************************
00:07:36.136   18:28:22 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:07:36.136   18:28:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:36.136   18:28:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:36.136   18:28:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:36.395  ************************************
00:07:36.395  START TEST locking_app_on_locked_coremask
00:07:36.395  ************************************
00:07:36.395   18:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:07:36.395   18:28:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:07:36.395   18:28:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=392614
00:07:36.395   18:28:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 392614 /var/tmp/spdk.sock
00:07:36.396   18:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 392614 ']'
00:07:36.396   18:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:36.396   18:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:36.396   18:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:36.396  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:36.396   18:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:36.396   18:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:36.396  [2024-11-17 18:28:22.829217] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:07:36.396  [2024-11-17 18:28:22.829365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid392614 ]
00:07:36.396  [2024-11-17 18:28:22.940083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:36.396  [2024-11-17 18:28:22.970307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:37.333   18:28:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:37.333   18:28:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:07:37.333   18:28:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:07:37.333   18:28:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=392824
00:07:37.333   18:28:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 392824 /var/tmp/spdk2.sock
00:07:37.333   18:28:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:07:37.333   18:28:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 392824 /var/tmp/spdk2.sock
00:07:37.333   18:28:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:07:37.333   18:28:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:37.333    18:28:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:07:37.333   18:28:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:37.333   18:28:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 392824 /var/tmp/spdk2.sock
00:07:37.333   18:28:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 392824 ']'
00:07:37.333   18:28:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:37.333   18:28:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:37.333   18:28:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:37.333  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:37.333   18:28:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:37.333   18:28:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:37.333  [2024-11-17 18:28:23.757570] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:07:37.333  [2024-11-17 18:28:23.757699] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid392824 ]
00:07:37.333  [2024-11-17 18:28:23.896554] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 392614 has claimed it.
00:07:37.333  [2024-11-17 18:28:23.896615] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:07:37.901  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (392824) - No such process
00:07:37.901  ERROR: process (pid: 392824) is no longer running
00:07:37.901   18:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:37.901   18:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:07:37.901   18:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:07:37.901   18:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:37.901   18:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:37.901   18:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:37.901   18:28:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 392614
00:07:37.901   18:28:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 392614
00:07:37.901   18:28:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:38.161  lslocks: write error
00:07:38.161   18:28:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 392614
00:07:38.161   18:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 392614 ']'
00:07:38.161   18:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 392614
00:07:38.161    18:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:07:38.161   18:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:38.161    18:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 392614
00:07:38.161   18:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:38.161   18:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:38.161   18:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 392614'
00:07:38.161  killing process with pid 392614
00:07:38.161   18:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 392614
00:07:38.161   18:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 392614
00:07:38.730  
00:07:38.730  real	0m2.333s
00:07:38.730  user	0m2.535s
00:07:38.730  sys	0m0.759s
00:07:38.730   18:28:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:38.730   18:28:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:38.730  ************************************
00:07:38.730  END TEST locking_app_on_locked_coremask
00:07:38.730  ************************************
00:07:38.730   18:28:25 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:07:38.730   18:28:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:38.730   18:28:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:38.730   18:28:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:38.730  ************************************
00:07:38.730  START TEST locking_overlapped_coremask
00:07:38.730  ************************************
00:07:38.730   18:28:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:07:38.730   18:28:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:07:38.730   18:28:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=393060
00:07:38.730   18:28:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 393060 /var/tmp/spdk.sock
00:07:38.730   18:28:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 393060 ']'
00:07:38.730   18:28:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:38.730   18:28:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:38.730   18:28:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:38.730  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:38.730   18:28:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:38.730   18:28:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:38.730  [2024-11-17 18:28:25.210229] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:07:38.730  [2024-11-17 18:28:25.210374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid393060 ]
00:07:38.989  [2024-11-17 18:28:25.321957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:38.989  [2024-11-17 18:28:25.354407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:38.989  [2024-11-17 18:28:25.354419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:38.989  [2024-11-17 18:28:25.354424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:39.558   18:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:39.558   18:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:07:39.558   18:28:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:07:39.558   18:28:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=393272
00:07:39.558   18:28:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 393272 /var/tmp/spdk2.sock
00:07:39.558   18:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:07:39.558   18:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 393272 /var/tmp/spdk2.sock
00:07:39.558   18:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:07:39.558   18:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:39.558    18:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:07:39.558   18:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:39.558   18:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 393272 /var/tmp/spdk2.sock
00:07:39.558   18:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 393272 ']'
00:07:39.558   18:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:39.558   18:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:39.558   18:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:39.558  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:39.558   18:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:39.558   18:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:39.818  [2024-11-17 18:28:26.135616] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:07:39.818  [2024-11-17 18:28:26.135754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid393272 ]
00:07:39.818  [2024-11-17 18:28:26.288396] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 393060 has claimed it.
00:07:39.818  [2024-11-17 18:28:26.288458] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:07:40.388  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (393272) - No such process
00:07:40.389  ERROR: process (pid: 393272) is no longer running
00:07:40.389   18:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:40.389   18:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:07:40.389   18:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:07:40.389   18:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:40.389   18:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:40.389   18:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:40.389   18:28:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:07:40.389   18:28:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:07:40.389   18:28:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:07:40.389   18:28:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:07:40.389   18:28:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 393060
00:07:40.389   18:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 393060 ']'
00:07:40.389   18:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 393060
00:07:40.389    18:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:07:40.389   18:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:40.389    18:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 393060
00:07:40.389   18:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:40.389   18:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:40.389   18:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 393060'
00:07:40.389  killing process with pid 393060
00:07:40.389   18:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 393060
00:07:40.389   18:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 393060
00:07:40.957  
00:07:40.957  real	0m2.151s
00:07:40.957  user	0m5.903s
00:07:40.957  sys	0m0.620s
00:07:40.957   18:28:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:40.957   18:28:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:40.957  ************************************
00:07:40.957  END TEST locking_overlapped_coremask
00:07:40.957  ************************************
00:07:40.957   18:28:27 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:07:40.957   18:28:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:40.957   18:28:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:40.957   18:28:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:40.957  ************************************
00:07:40.957  START TEST locking_overlapped_coremask_via_rpc
00:07:40.957  ************************************
00:07:40.957   18:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:07:40.957   18:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:07:40.957   18:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=393508
00:07:40.957   18:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 393508 /var/tmp/spdk.sock
00:07:40.957   18:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 393508 ']'
00:07:40.957   18:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:40.957   18:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:40.957   18:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:40.957  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:40.957   18:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:40.957   18:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:40.957  [2024-11-17 18:28:27.410097] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:07:40.957  [2024-11-17 18:28:27.410242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid393508 ]
00:07:40.957  [2024-11-17 18:28:27.522048] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:40.957  [2024-11-17 18:28:27.522099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:41.216  [2024-11-17 18:28:27.563093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:41.216  [2024-11-17 18:28:27.563099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:41.216  [2024-11-17 18:28:27.563150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:41.784   18:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:41.784   18:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:07:41.784   18:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:07:41.784   18:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=393718
00:07:41.784   18:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 393718 /var/tmp/spdk2.sock
00:07:41.784   18:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 393718 ']'
00:07:41.784   18:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:41.784   18:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:41.784   18:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:41.784  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:41.784   18:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:41.784   18:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:41.784  [2024-11-17 18:28:28.322615] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:07:41.784  [2024-11-17 18:28:28.322745] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid393718 ]
00:07:42.044  [2024-11-17 18:28:28.471278] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:42.044  [2024-11-17 18:28:28.471325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:42.044  [2024-11-17 18:28:28.551030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:07:42.044  [2024-11-17 18:28:28.554988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:42.044  [2024-11-17 18:28:28.555036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:07:42.611   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:42.611   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:07:42.611   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:07:42.611   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:42.611   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:42.611   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:42.611   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:07:42.611   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0
00:07:42.611   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:07:42.611   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:07:42.611   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:42.611    18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:07:42.611   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:42.611   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:07:42.611   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:42.611   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:42.611  [2024-11-17 18:28:29.165024] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 393508 has claimed it.
00:07:42.611  request:
00:07:42.611  {
00:07:42.611  "method": "framework_enable_cpumask_locks",
00:07:42.611  "req_id": 1
00:07:42.611  }
00:07:42.611  Got JSON-RPC error response
00:07:42.611  response:
00:07:42.611  {
00:07:42.611  "code": -32603,
00:07:42.611  "message": "Failed to claim CPU core: 2"
00:07:42.611  }
00:07:42.611   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:07:42.611   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1
00:07:42.611   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:42.611   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:42.611   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:42.611   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 393508 /var/tmp/spdk.sock
00:07:42.611   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 393508 ']'
00:07:42.611   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:42.611   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:42.611   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:42.611  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:42.611   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:42.611   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:42.870   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:42.870   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:07:42.870   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 393718 /var/tmp/spdk2.sock
00:07:42.870   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 393718 ']'
00:07:42.870   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:42.870   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:42.870   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:42.870  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:42.870   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:42.870   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:43.128   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:43.128   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:07:43.128   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks
00:07:43.128   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:07:43.128   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:07:43.128   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:07:43.128  
00:07:43.128  real	0m2.280s
00:07:43.128  user	0m1.034s
00:07:43.128  sys	0m0.177s
00:07:43.128   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:43.128   18:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:43.128  ************************************
00:07:43.128  END TEST locking_overlapped_coremask_via_rpc
00:07:43.128  ************************************
00:07:43.128   18:28:29 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup
00:07:43.128   18:28:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 393508 ]]
00:07:43.128   18:28:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 393508
00:07:43.128   18:28:29 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 393508 ']'
00:07:43.128   18:28:29 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 393508
00:07:43.128    18:28:29 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:07:43.128   18:28:29 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:43.128    18:28:29 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 393508
00:07:43.128   18:28:29 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:43.128   18:28:29 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:43.128   18:28:29 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 393508'
00:07:43.128  killing process with pid 393508
00:07:43.128   18:28:29 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 393508
00:07:43.128   18:28:29 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 393508
00:07:43.695   18:28:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 393718 ]]
00:07:43.695   18:28:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 393718
00:07:43.695   18:28:30 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 393718 ']'
00:07:43.695   18:28:30 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 393718
00:07:43.695    18:28:30 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:07:43.695   18:28:30 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:43.695    18:28:30 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 393718
00:07:43.695   18:28:30 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:07:43.695   18:28:30 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:07:43.695   18:28:30 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 393718'
00:07:43.695  killing process with pid 393718
00:07:43.695   18:28:30 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 393718
00:07:43.695   18:28:30 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 393718
00:07:44.264   18:28:30 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:07:44.264   18:28:30 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup
00:07:44.264   18:28:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 393508 ]]
00:07:44.264   18:28:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 393508
00:07:44.264   18:28:30 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 393508 ']'
00:07:44.264   18:28:30 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 393508
00:07:44.264  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (393508) - No such process
00:07:44.264   18:28:30 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 393508 is not found'
00:07:44.264  Process with pid 393508 is not found
00:07:44.264   18:28:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 393718 ]]
00:07:44.264   18:28:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 393718
00:07:44.264   18:28:30 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 393718 ']'
00:07:44.264   18:28:30 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 393718
00:07:44.264  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (393718) - No such process
00:07:44.264   18:28:30 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 393718 is not found'
00:07:44.264  Process with pid 393718 is not found
00:07:44.264   18:28:30 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:07:44.264  
00:07:44.264  real	0m18.767s
00:07:44.264  user	0m32.249s
00:07:44.264  sys	0m6.117s
00:07:44.264   18:28:30 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:44.264   18:28:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:44.264  ************************************
00:07:44.264  END TEST cpu_locks
00:07:44.264  ************************************
00:07:44.264  
00:07:44.264  real	0m43.573s
00:07:44.264  user	1m22.576s
00:07:44.264  sys	0m9.716s
00:07:44.264   18:28:30 event -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:44.264   18:28:30 event -- common/autotest_common.sh@10 -- # set +x
00:07:44.264  ************************************
00:07:44.264  END TEST event
00:07:44.264  ************************************
00:07:44.264   18:28:30  -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/thread/thread.sh
00:07:44.264   18:28:30  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:44.264   18:28:30  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:44.264   18:28:30  -- common/autotest_common.sh@10 -- # set +x
00:07:44.264  ************************************
00:07:44.264  START TEST thread
00:07:44.264  ************************************
00:07:44.264   18:28:30 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/thread/thread.sh
00:07:44.264  * Looking for test storage...
00:07:44.264  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/thread
00:07:44.264    18:28:30 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:07:44.264     18:28:30 thread -- common/autotest_common.sh@1693 -- # lcov --version
00:07:44.264     18:28:30 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:07:44.264    18:28:30 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:07:44.264    18:28:30 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:44.264    18:28:30 thread -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:44.264    18:28:30 thread -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:44.264    18:28:30 thread -- scripts/common.sh@336 -- # IFS=.-:
00:07:44.264    18:28:30 thread -- scripts/common.sh@336 -- # read -ra ver1
00:07:44.264    18:28:30 thread -- scripts/common.sh@337 -- # IFS=.-:
00:07:44.264    18:28:30 thread -- scripts/common.sh@337 -- # read -ra ver2
00:07:44.264    18:28:30 thread -- scripts/common.sh@338 -- # local 'op=<'
00:07:44.264    18:28:30 thread -- scripts/common.sh@340 -- # ver1_l=2
00:07:44.264    18:28:30 thread -- scripts/common.sh@341 -- # ver2_l=1
00:07:44.264    18:28:30 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:44.264    18:28:30 thread -- scripts/common.sh@344 -- # case "$op" in
00:07:44.264    18:28:30 thread -- scripts/common.sh@345 -- # : 1
00:07:44.264    18:28:30 thread -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:44.264    18:28:30 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:44.524     18:28:30 thread -- scripts/common.sh@365 -- # decimal 1
00:07:44.524     18:28:30 thread -- scripts/common.sh@353 -- # local d=1
00:07:44.524     18:28:30 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:44.524     18:28:30 thread -- scripts/common.sh@355 -- # echo 1
00:07:44.524    18:28:30 thread -- scripts/common.sh@365 -- # ver1[v]=1
00:07:44.524     18:28:30 thread -- scripts/common.sh@366 -- # decimal 2
00:07:44.524     18:28:30 thread -- scripts/common.sh@353 -- # local d=2
00:07:44.524     18:28:30 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:44.524     18:28:30 thread -- scripts/common.sh@355 -- # echo 2
00:07:44.524    18:28:30 thread -- scripts/common.sh@366 -- # ver2[v]=2
00:07:44.524    18:28:30 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:44.524    18:28:30 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:44.524    18:28:30 thread -- scripts/common.sh@368 -- # return 0
00:07:44.524    18:28:30 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:44.524    18:28:30 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:07:44.524  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:44.524  		--rc genhtml_branch_coverage=1
00:07:44.524  		--rc genhtml_function_coverage=1
00:07:44.524  		--rc genhtml_legend=1
00:07:44.524  		--rc geninfo_all_blocks=1
00:07:44.524  		--rc geninfo_unexecuted_blocks=1
00:07:44.524  		
00:07:44.524  		'
00:07:44.524    18:28:30 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:07:44.524  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:44.524  		--rc genhtml_branch_coverage=1
00:07:44.524  		--rc genhtml_function_coverage=1
00:07:44.524  		--rc genhtml_legend=1
00:07:44.524  		--rc geninfo_all_blocks=1
00:07:44.524  		--rc geninfo_unexecuted_blocks=1
00:07:44.524  		
00:07:44.524  		'
00:07:44.524    18:28:30 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:07:44.524  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:44.524  		--rc genhtml_branch_coverage=1
00:07:44.524  		--rc genhtml_function_coverage=1
00:07:44.524  		--rc genhtml_legend=1
00:07:44.524  		--rc geninfo_all_blocks=1
00:07:44.524  		--rc geninfo_unexecuted_blocks=1
00:07:44.524  		
00:07:44.524  		'
00:07:44.524    18:28:30 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:07:44.524  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:44.524  		--rc genhtml_branch_coverage=1
00:07:44.524  		--rc genhtml_function_coverage=1
00:07:44.524  		--rc genhtml_legend=1
00:07:44.524  		--rc geninfo_all_blocks=1
00:07:44.524  		--rc geninfo_unexecuted_blocks=1
00:07:44.524  		
00:07:44.524  		'
00:07:44.524   18:28:30 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:07:44.524   18:28:30 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:07:44.524   18:28:30 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:44.524   18:28:30 thread -- common/autotest_common.sh@10 -- # set +x
00:07:44.524  ************************************
00:07:44.524  START TEST thread_poller_perf
00:07:44.524  ************************************
00:07:44.524   18:28:30 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:07:44.524  [2024-11-17 18:28:30.911407] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:07:44.524  [2024-11-17 18:28:30.911539] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid394241 ]
00:07:44.524  [2024-11-17 18:28:31.026464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:44.524  [2024-11-17 18:28:31.061034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:44.524  Running 1000 pollers for 1 seconds with 1 microseconds period.
00:07:45.903  [2024-11-17T17:28:32.479Z]  ======================================
00:07:45.903  [2024-11-17T17:28:32.479Z]  busy:2207177090 (cyc)
00:07:45.903  [2024-11-17T17:28:32.479Z]  total_run_count: 382000
00:07:45.903  [2024-11-17T17:28:32.479Z]  tsc_hz: 2200000000 (cyc)
00:07:45.903  [2024-11-17T17:28:32.479Z]  ======================================
00:07:45.903  [2024-11-17T17:28:32.479Z]  poller_cost: 5777 (cyc), 2625 (nsec)
00:07:45.903  
00:07:45.903  real	0m1.250s
00:07:45.903  user	0m1.119s
00:07:45.903  sys	0m0.126s
00:07:45.903   18:28:32 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:45.903   18:28:32 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:07:45.903  ************************************
00:07:45.903  END TEST thread_poller_perf
00:07:45.903  ************************************
00:07:45.903   18:28:32 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:07:45.903   18:28:32 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:07:45.903   18:28:32 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:45.903   18:28:32 thread -- common/autotest_common.sh@10 -- # set +x
00:07:45.903  ************************************
00:07:45.903  START TEST thread_poller_perf
00:07:45.903  ************************************
00:07:45.903   18:28:32 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:07:45.903  [2024-11-17 18:28:32.207736] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:07:45.903  [2024-11-17 18:28:32.207862] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid394471 ]
00:07:45.903  [2024-11-17 18:28:32.319517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:45.903  [2024-11-17 18:28:32.355649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:45.903  Running 1000 pollers for 1 seconds with 0 microseconds period.
00:07:46.842  [2024-11-17T17:28:33.418Z]  ======================================
00:07:46.842  [2024-11-17T17:28:33.418Z]  busy:2202709804 (cyc)
00:07:46.842  [2024-11-17T17:28:33.418Z]  total_run_count: 5042000
00:07:46.842  [2024-11-17T17:28:33.418Z]  tsc_hz: 2200000000 (cyc)
00:07:46.842  [2024-11-17T17:28:33.418Z]  ======================================
00:07:46.842  [2024-11-17T17:28:33.418Z]  poller_cost: 436 (cyc), 198 (nsec)
00:07:47.101  
00:07:47.101  real	0m1.245s
00:07:47.101  user	0m1.118s
00:07:47.101  sys	0m0.121s
00:07:47.101   18:28:33 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:47.101   18:28:33 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:07:47.101  ************************************
00:07:47.101  END TEST thread_poller_perf
00:07:47.101  ************************************
00:07:47.101   18:28:33 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:07:47.101  
00:07:47.101  real	0m2.721s
00:07:47.101  user	0m2.362s
00:07:47.101  sys	0m0.362s
00:07:47.101   18:28:33 thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:47.101   18:28:33 thread -- common/autotest_common.sh@10 -- # set +x
00:07:47.101  ************************************
00:07:47.101  END TEST thread
00:07:47.101  ************************************
00:07:47.101   18:28:33  -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
00:07:47.101   18:28:33  -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/app/cmdline.sh
00:07:47.101   18:28:33  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:47.101   18:28:33  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:47.101   18:28:33  -- common/autotest_common.sh@10 -- # set +x
00:07:47.101  ************************************
00:07:47.101  START TEST app_cmdline
00:07:47.101  ************************************
00:07:47.101   18:28:33 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/app/cmdline.sh
00:07:47.101  * Looking for test storage...
00:07:47.101  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/app
00:07:47.101    18:28:33 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:07:47.101     18:28:33 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version
00:07:47.101     18:28:33 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:07:47.101    18:28:33 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:07:47.101    18:28:33 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:47.101    18:28:33 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:47.101    18:28:33 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:47.101    18:28:33 app_cmdline -- scripts/common.sh@336 -- # IFS=.-:
00:07:47.101    18:28:33 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1
00:07:47.101    18:28:33 app_cmdline -- scripts/common.sh@337 -- # IFS=.-:
00:07:47.101    18:28:33 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2
00:07:47.101    18:28:33 app_cmdline -- scripts/common.sh@338 -- # local 'op=<'
00:07:47.102    18:28:33 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2
00:07:47.102    18:28:33 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1
00:07:47.102    18:28:33 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:47.102    18:28:33 app_cmdline -- scripts/common.sh@344 -- # case "$op" in
00:07:47.102    18:28:33 app_cmdline -- scripts/common.sh@345 -- # : 1
00:07:47.102    18:28:33 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:47.102    18:28:33 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:47.102     18:28:33 app_cmdline -- scripts/common.sh@365 -- # decimal 1
00:07:47.102     18:28:33 app_cmdline -- scripts/common.sh@353 -- # local d=1
00:07:47.102     18:28:33 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:47.102     18:28:33 app_cmdline -- scripts/common.sh@355 -- # echo 1
00:07:47.102    18:28:33 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1
00:07:47.102     18:28:33 app_cmdline -- scripts/common.sh@366 -- # decimal 2
00:07:47.102     18:28:33 app_cmdline -- scripts/common.sh@353 -- # local d=2
00:07:47.102     18:28:33 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:47.102     18:28:33 app_cmdline -- scripts/common.sh@355 -- # echo 2
00:07:47.102    18:28:33 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2
00:07:47.102    18:28:33 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:47.102    18:28:33 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:47.102    18:28:33 app_cmdline -- scripts/common.sh@368 -- # return 0
00:07:47.102    18:28:33 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:47.102    18:28:33 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:07:47.102  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:47.102  		--rc genhtml_branch_coverage=1
00:07:47.102  		--rc genhtml_function_coverage=1
00:07:47.102  		--rc genhtml_legend=1
00:07:47.102  		--rc geninfo_all_blocks=1
00:07:47.102  		--rc geninfo_unexecuted_blocks=1
00:07:47.102  		
00:07:47.102  		'
00:07:47.102    18:28:33 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:07:47.102  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:47.102  		--rc genhtml_branch_coverage=1
00:07:47.102  		--rc genhtml_function_coverage=1
00:07:47.102  		--rc genhtml_legend=1
00:07:47.102  		--rc geninfo_all_blocks=1
00:07:47.102  		--rc geninfo_unexecuted_blocks=1
00:07:47.102  		
00:07:47.102  		'
00:07:47.102    18:28:33 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:07:47.102  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:47.102  		--rc genhtml_branch_coverage=1
00:07:47.102  		--rc genhtml_function_coverage=1
00:07:47.102  		--rc genhtml_legend=1
00:07:47.102  		--rc geninfo_all_blocks=1
00:07:47.102  		--rc geninfo_unexecuted_blocks=1
00:07:47.102  		
00:07:47.102  		'
00:07:47.102    18:28:33 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:07:47.102  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:47.102  		--rc genhtml_branch_coverage=1
00:07:47.102  		--rc genhtml_function_coverage=1
00:07:47.102  		--rc genhtml_legend=1
00:07:47.102  		--rc geninfo_all_blocks=1
00:07:47.102  		--rc geninfo_unexecuted_blocks=1
00:07:47.102  		
00:07:47.102  		'
00:07:47.102   18:28:33 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:07:47.102   18:28:33 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=394919
00:07:47.102   18:28:33 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:07:47.102   18:28:33 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 394919
00:07:47.102   18:28:33 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 394919 ']'
00:07:47.102   18:28:33 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:47.102   18:28:33 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:47.102   18:28:33 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:47.102  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:47.102   18:28:33 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:47.102   18:28:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:07:47.361  [2024-11-17 18:28:33.707141] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:07:47.361  [2024-11-17 18:28:33.707282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid394919 ]
00:07:47.361  [2024-11-17 18:28:33.814729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:47.361  [2024-11-17 18:28:33.849860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:48.300   18:28:34 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:48.300   18:28:34 app_cmdline -- common/autotest_common.sh@868 -- # return 0
00:07:48.300   18:28:34 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py spdk_get_version
00:07:48.300  {
00:07:48.300    "version": "SPDK v25.01-pre git sha1 83e8405e4",
00:07:48.300    "fields": {
00:07:48.300      "major": 25,
00:07:48.300      "minor": 1,
00:07:48.300      "patch": 0,
00:07:48.300      "suffix": "-pre",
00:07:48.300      "commit": "83e8405e4"
00:07:48.300    }
00:07:48.300  }
00:07:48.300   18:28:34 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:07:48.300   18:28:34 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:07:48.300   18:28:34 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:07:48.300   18:28:34 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:07:48.300    18:28:34 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:07:48.300    18:28:34 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:48.300    18:28:34 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:07:48.300    18:28:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:07:48.300    18:28:34 app_cmdline -- app/cmdline.sh@26 -- # sort
00:07:48.300    18:28:34 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:48.300   18:28:34 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:07:48.300   18:28:34 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:07:48.300   18:28:34 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:07:48.300   18:28:34 app_cmdline -- common/autotest_common.sh@652 -- # local es=0
00:07:48.300   18:28:34 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:07:48.300   18:28:34 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:07:48.300   18:28:34 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:48.300    18:28:34 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:07:48.300   18:28:34 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:48.300    18:28:34 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:07:48.300   18:28:34 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:48.300   18:28:34 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:07:48.300   18:28:34 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py ]]
00:07:48.300   18:28:34 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:07:48.560  request:
00:07:48.560  {
00:07:48.560    "method": "env_dpdk_get_mem_stats",
00:07:48.560    "req_id": 1
00:07:48.560  }
00:07:48.560  Got JSON-RPC error response
00:07:48.560  response:
00:07:48.560  {
00:07:48.560    "code": -32601,
00:07:48.560    "message": "Method not found"
00:07:48.560  }
00:07:48.560   18:28:34 app_cmdline -- common/autotest_common.sh@655 -- # es=1
00:07:48.560   18:28:34 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:48.560   18:28:34 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:48.560   18:28:34 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:48.560   18:28:34 app_cmdline -- app/cmdline.sh@1 -- # killprocess 394919
00:07:48.560   18:28:34 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 394919 ']'
00:07:48.560   18:28:34 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 394919
00:07:48.560    18:28:34 app_cmdline -- common/autotest_common.sh@959 -- # uname
00:07:48.560   18:28:34 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:48.560    18:28:34 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 394919
00:07:48.560   18:28:35 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:48.560   18:28:35 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:48.560   18:28:35 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 394919'
00:07:48.560  killing process with pid 394919
00:07:48.560   18:28:35 app_cmdline -- common/autotest_common.sh@973 -- # kill 394919
00:07:48.560   18:28:35 app_cmdline -- common/autotest_common.sh@978 -- # wait 394919
00:07:49.128  
00:07:49.128  real	0m1.961s
00:07:49.128  user	0m2.275s
00:07:49.128  sys	0m0.568s
00:07:49.128   18:28:35 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:49.128   18:28:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:07:49.128  ************************************
00:07:49.128  END TEST app_cmdline
00:07:49.128  ************************************
00:07:49.128   18:28:35  -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/app/version.sh
00:07:49.128   18:28:35  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:49.128   18:28:35  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:49.128   18:28:35  -- common/autotest_common.sh@10 -- # set +x
00:07:49.128  ************************************
00:07:49.128  START TEST version
00:07:49.128  ************************************
00:07:49.128   18:28:35 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/app/version.sh
00:07:49.128  * Looking for test storage...
00:07:49.128  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/app
00:07:49.128    18:28:35 version -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:07:49.128     18:28:35 version -- common/autotest_common.sh@1693 -- # lcov --version
00:07:49.128     18:28:35 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:07:49.128    18:28:35 version -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:07:49.128    18:28:35 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:49.128    18:28:35 version -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:49.128    18:28:35 version -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:49.128    18:28:35 version -- scripts/common.sh@336 -- # IFS=.-:
00:07:49.128    18:28:35 version -- scripts/common.sh@336 -- # read -ra ver1
00:07:49.128    18:28:35 version -- scripts/common.sh@337 -- # IFS=.-:
00:07:49.128    18:28:35 version -- scripts/common.sh@337 -- # read -ra ver2
00:07:49.128    18:28:35 version -- scripts/common.sh@338 -- # local 'op=<'
00:07:49.128    18:28:35 version -- scripts/common.sh@340 -- # ver1_l=2
00:07:49.128    18:28:35 version -- scripts/common.sh@341 -- # ver2_l=1
00:07:49.128    18:28:35 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:49.128    18:28:35 version -- scripts/common.sh@344 -- # case "$op" in
00:07:49.129    18:28:35 version -- scripts/common.sh@345 -- # : 1
00:07:49.129    18:28:35 version -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:49.129    18:28:35 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:49.129     18:28:35 version -- scripts/common.sh@365 -- # decimal 1
00:07:49.129     18:28:35 version -- scripts/common.sh@353 -- # local d=1
00:07:49.129     18:28:35 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:49.129     18:28:35 version -- scripts/common.sh@355 -- # echo 1
00:07:49.129    18:28:35 version -- scripts/common.sh@365 -- # ver1[v]=1
00:07:49.129     18:28:35 version -- scripts/common.sh@366 -- # decimal 2
00:07:49.129     18:28:35 version -- scripts/common.sh@353 -- # local d=2
00:07:49.129     18:28:35 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:49.129     18:28:35 version -- scripts/common.sh@355 -- # echo 2
00:07:49.129    18:28:35 version -- scripts/common.sh@366 -- # ver2[v]=2
00:07:49.129    18:28:35 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:49.129    18:28:35 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:49.129    18:28:35 version -- scripts/common.sh@368 -- # return 0
00:07:49.129    18:28:35 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:49.129    18:28:35 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:07:49.129  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:49.129  		--rc genhtml_branch_coverage=1
00:07:49.129  		--rc genhtml_function_coverage=1
00:07:49.129  		--rc genhtml_legend=1
00:07:49.129  		--rc geninfo_all_blocks=1
00:07:49.129  		--rc geninfo_unexecuted_blocks=1
00:07:49.129  		
00:07:49.129  		'
00:07:49.129    18:28:35 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:07:49.129  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:49.129  		--rc genhtml_branch_coverage=1
00:07:49.129  		--rc genhtml_function_coverage=1
00:07:49.129  		--rc genhtml_legend=1
00:07:49.129  		--rc geninfo_all_blocks=1
00:07:49.129  		--rc geninfo_unexecuted_blocks=1
00:07:49.129  		
00:07:49.129  		'
00:07:49.129    18:28:35 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:07:49.129  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:49.129  		--rc genhtml_branch_coverage=1
00:07:49.129  		--rc genhtml_function_coverage=1
00:07:49.129  		--rc genhtml_legend=1
00:07:49.129  		--rc geninfo_all_blocks=1
00:07:49.129  		--rc geninfo_unexecuted_blocks=1
00:07:49.129  		
00:07:49.129  		'
00:07:49.129    18:28:35 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:07:49.129  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:49.129  		--rc genhtml_branch_coverage=1
00:07:49.129  		--rc genhtml_function_coverage=1
00:07:49.129  		--rc genhtml_legend=1
00:07:49.129  		--rc geninfo_all_blocks=1
00:07:49.129  		--rc geninfo_unexecuted_blocks=1
00:07:49.129  		
00:07:49.129  		'
00:07:49.129    18:28:35 version -- app/version.sh@17 -- # get_header_version major
00:07:49.129    18:28:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/include/spdk/version.h
00:07:49.129    18:28:35 version -- app/version.sh@14 -- # cut -f2
00:07:49.129    18:28:35 version -- app/version.sh@14 -- # tr -d '"'
00:07:49.129   18:28:35 version -- app/version.sh@17 -- # major=25
00:07:49.129    18:28:35 version -- app/version.sh@18 -- # get_header_version minor
00:07:49.129    18:28:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/include/spdk/version.h
00:07:49.129    18:28:35 version -- app/version.sh@14 -- # cut -f2
00:07:49.129    18:28:35 version -- app/version.sh@14 -- # tr -d '"'
00:07:49.129   18:28:35 version -- app/version.sh@18 -- # minor=1
00:07:49.129    18:28:35 version -- app/version.sh@19 -- # get_header_version patch
00:07:49.129    18:28:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/include/spdk/version.h
00:07:49.129    18:28:35 version -- app/version.sh@14 -- # cut -f2
00:07:49.129    18:28:35 version -- app/version.sh@14 -- # tr -d '"'
00:07:49.129   18:28:35 version -- app/version.sh@19 -- # patch=0
00:07:49.129    18:28:35 version -- app/version.sh@20 -- # get_header_version suffix
00:07:49.129    18:28:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/include/spdk/version.h
00:07:49.129    18:28:35 version -- app/version.sh@14 -- # cut -f2
00:07:49.129    18:28:35 version -- app/version.sh@14 -- # tr -d '"'
00:07:49.129   18:28:35 version -- app/version.sh@20 -- # suffix=-pre
00:07:49.129   18:28:35 version -- app/version.sh@22 -- # version=25.1
00:07:49.129   18:28:35 version -- app/version.sh@25 -- # (( patch != 0 ))
00:07:49.129   18:28:35 version -- app/version.sh@28 -- # version=25.1rc0
00:07:49.129   18:28:35 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/python:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/python:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/python
00:07:49.129    18:28:35 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:07:49.129   18:28:35 version -- app/version.sh@30 -- # py_version=25.1rc0
00:07:49.129   18:28:35 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]]
00:07:49.129  
00:07:49.129  real	0m0.173s
00:07:49.129  user	0m0.113s
00:07:49.129  sys	0m0.086s
00:07:49.129   18:28:35 version -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:49.129   18:28:35 version -- common/autotest_common.sh@10 -- # set +x
00:07:49.129  ************************************
00:07:49.129  END TEST version
00:07:49.129  ************************************
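The version test traced above greps the `SPDK_VERSION_*` defines out of `include/spdk/version.h`, takes the second tab-separated field with `cut -f2`, strips quotes with `tr -d '"'`, and assembles `25.1rc0` (patch 0 plus a `-pre` suffix). A minimal sketch of that flow — the header content below is a hypothetical stand-in for the real `version.h`, not a copy of it:

```shell
#!/usr/bin/env bash
# Sketch of the app/version.sh flow seen in the trace above.
# The header written here is a stand-in for include/spdk/version.h.
set -eu

hdr=$(mktemp)
printf '#define SPDK_VERSION_MAJOR\t25\n' >  "$hdr"
printf '#define SPDK_VERSION_MINOR\t1\n'  >> "$hdr"
printf '#define SPDK_VERSION_PATCH\t0\n'  >> "$hdr"
printf '#define SPDK_VERSION_SUFFIX\t"-pre"\n' >> "$hdr"

get_header_version() {
    # grep the #define, take the tab-separated value field, drop quotes
    grep -E "^#define SPDK_VERSION_${1^^}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
}

major=$(get_header_version major)
minor=$(get_header_version minor)
patch=$(get_header_version patch)
suffix=$(get_header_version suffix)

version="$major.$minor"
(( patch != 0 )) && version="$version.$patch"
# a -pre suffix becomes "rc0", matching the 25.1rc0 seen in the log
[[ $suffix == -pre ]] && version="${version}rc0"
echo "$version"   # prints 25.1rc0
rm -f "$hdr"
```

The trace then cross-checks this against `python3 -c 'import spdk; print(spdk.__version__)'`, which is why `py_version=25.1rc0` must match the shell-derived string.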
00:07:49.129   18:28:35  -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']'
00:07:49.129   18:28:35  -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]]
00:07:49.129    18:28:35  -- spdk/autotest.sh@194 -- # uname -s
00:07:49.129   18:28:35  -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:07:49.129   18:28:35  -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:07:49.129   18:28:35  -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:07:49.129   18:28:35  -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
00:07:49.129   18:28:35  -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:07:49.129   18:28:35  -- spdk/autotest.sh@260 -- # timing_exit lib
00:07:49.129   18:28:35  -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:49.129   18:28:35  -- common/autotest_common.sh@10 -- # set +x
00:07:49.390   18:28:35  -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:07:49.390   18:28:35  -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:07:49.390   18:28:35  -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']'
00:07:49.390   18:28:35  -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:07:49.390   18:28:35  -- spdk/autotest.sh@315 -- # '[' 1 -eq 1 ']'
00:07:49.390   18:28:35  -- spdk/autotest.sh@316 -- # HUGENODE=0
00:07:49.390   18:28:35  -- spdk/autotest.sh@316 -- # run_test vfio_user_qemu /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/vfio_user.sh --iso
00:07:49.390   18:28:35  -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:49.390   18:28:35  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:49.390   18:28:35  -- common/autotest_common.sh@10 -- # set +x
00:07:49.390  ************************************
00:07:49.390  START TEST vfio_user_qemu
00:07:49.390  ************************************
00:07:49.390   18:28:35 vfio_user_qemu -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/vfio_user.sh --iso
00:07:49.390  * Looking for test storage...
00:07:49.390  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user
00:07:49.390    18:28:35 vfio_user_qemu -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:07:49.390     18:28:35 vfio_user_qemu -- common/autotest_common.sh@1693 -- # lcov --version
00:07:49.390     18:28:35 vfio_user_qemu -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:07:49.390    18:28:35 vfio_user_qemu -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:07:49.390    18:28:35 vfio_user_qemu -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:49.390    18:28:35 vfio_user_qemu -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:49.390    18:28:35 vfio_user_qemu -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:49.390    18:28:35 vfio_user_qemu -- scripts/common.sh@336 -- # IFS=.-:
00:07:49.390    18:28:35 vfio_user_qemu -- scripts/common.sh@336 -- # read -ra ver1
00:07:49.390    18:28:35 vfio_user_qemu -- scripts/common.sh@337 -- # IFS=.-:
00:07:49.390    18:28:35 vfio_user_qemu -- scripts/common.sh@337 -- # read -ra ver2
00:07:49.390    18:28:35 vfio_user_qemu -- scripts/common.sh@338 -- # local 'op=<'
00:07:49.390    18:28:35 vfio_user_qemu -- scripts/common.sh@340 -- # ver1_l=2
00:07:49.390    18:28:35 vfio_user_qemu -- scripts/common.sh@341 -- # ver2_l=1
00:07:49.390    18:28:35 vfio_user_qemu -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:49.390    18:28:35 vfio_user_qemu -- scripts/common.sh@344 -- # case "$op" in
00:07:49.390    18:28:35 vfio_user_qemu -- scripts/common.sh@345 -- # : 1
00:07:49.390    18:28:35 vfio_user_qemu -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:49.390    18:28:35 vfio_user_qemu -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:49.390     18:28:35 vfio_user_qemu -- scripts/common.sh@365 -- # decimal 1
00:07:49.390     18:28:35 vfio_user_qemu -- scripts/common.sh@353 -- # local d=1
00:07:49.390     18:28:35 vfio_user_qemu -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:49.390     18:28:35 vfio_user_qemu -- scripts/common.sh@355 -- # echo 1
00:07:49.390    18:28:35 vfio_user_qemu -- scripts/common.sh@365 -- # ver1[v]=1
00:07:49.390     18:28:35 vfio_user_qemu -- scripts/common.sh@366 -- # decimal 2
00:07:49.390     18:28:35 vfio_user_qemu -- scripts/common.sh@353 -- # local d=2
00:07:49.390     18:28:35 vfio_user_qemu -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:49.390     18:28:35 vfio_user_qemu -- scripts/common.sh@355 -- # echo 2
00:07:49.390    18:28:35 vfio_user_qemu -- scripts/common.sh@366 -- # ver2[v]=2
00:07:49.390    18:28:35 vfio_user_qemu -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:49.390    18:28:35 vfio_user_qemu -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:49.390    18:28:35 vfio_user_qemu -- scripts/common.sh@368 -- # return 0
00:07:49.390    18:28:35 vfio_user_qemu -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
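The scripts/common.sh trace above walks `cmp_versions 1.15 '<' 2`: both version strings are split on `.`, `-`, and `:` into the arrays `ver1`/`ver2`, then compared component by component as integers, so `lt 1.15 2` succeeds and `lcov_rc_opt` gets set. A condensed sketch of that comparison (omitting the `decimal` sanitizer the real script runs on each component):

```shell
#!/usr/bin/env bash
# Condensed sketch of the cmp_versions loop traced above.
ver_lt() {
    local -a ver1 ver2
    local v
    IFS=.-: read -ra ver1 <<< "$1"   # same IFS split as scripts/common.sh@336
    IFS=.-: read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        # missing components count as 0; real script sanitizes via decimal()
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal is not less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"     # prints: 1.15 < 2
ver_lt 2.1 2.1 || echo "2.1 == 2.1"  # prints: 2.1 == 2.1
```

Note the components compare numerically, not lexically: `1.9` is less than `1.15` would be false lexically but true here, which is why the lcov version gate behaves correctly for two-digit minors.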
00:07:49.390    18:28:35 vfio_user_qemu -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:07:49.390  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:49.390  		--rc genhtml_branch_coverage=1
00:07:49.390  		--rc genhtml_function_coverage=1
00:07:49.390  		--rc genhtml_legend=1
00:07:49.390  		--rc geninfo_all_blocks=1
00:07:49.390  		--rc geninfo_unexecuted_blocks=1
00:07:49.390  		
00:07:49.390  		'
00:07:49.390    18:28:35 vfio_user_qemu -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:07:49.390  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:49.390  		--rc genhtml_branch_coverage=1
00:07:49.390  		--rc genhtml_function_coverage=1
00:07:49.390  		--rc genhtml_legend=1
00:07:49.390  		--rc geninfo_all_blocks=1
00:07:49.390  		--rc geninfo_unexecuted_blocks=1
00:07:49.390  		
00:07:49.390  		'
00:07:49.390    18:28:35 vfio_user_qemu -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:07:49.390  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:49.390  		--rc genhtml_branch_coverage=1
00:07:49.390  		--rc genhtml_function_coverage=1
00:07:49.390  		--rc genhtml_legend=1
00:07:49.390  		--rc geninfo_all_blocks=1
00:07:49.390  		--rc geninfo_unexecuted_blocks=1
00:07:49.390  		
00:07:49.390  		'
00:07:49.390    18:28:35 vfio_user_qemu -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:07:49.390  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:49.390  		--rc genhtml_branch_coverage=1
00:07:49.390  		--rc genhtml_function_coverage=1
00:07:49.390  		--rc genhtml_legend=1
00:07:49.390  		--rc geninfo_all_blocks=1
00:07:49.390  		--rc geninfo_unexecuted_blocks=1
00:07:49.390  		
00:07:49.390  		'
00:07:49.390   18:28:35 vfio_user_qemu -- vfio_user/vfio_user.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/common.sh
00:07:49.390    18:28:35 vfio_user_qemu -- vfio_user/common.sh@6 -- # : 128
00:07:49.390    18:28:35 vfio_user_qemu -- vfio_user/common.sh@7 -- # : 512
00:07:49.390    18:28:35 vfio_user_qemu -- vfio_user/common.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh
00:07:49.390     18:28:35 vfio_user_qemu -- vhost/common.sh@6 -- # : false
00:07:49.390     18:28:35 vfio_user_qemu -- vhost/common.sh@7 -- # : /root/vhost_test
00:07:49.390     18:28:35 vfio_user_qemu -- vhost/common.sh@8 -- # : /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:07:49.390     18:28:35 vfio_user_qemu -- vhost/common.sh@9 -- # : qemu-img
00:07:49.390      18:28:35 vfio_user_qemu -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/..
00:07:49.390     18:28:35 vfio_user_qemu -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest
00:07:49.390     18:28:35 vfio_user_qemu -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:07:49.390     18:28:35 vfio_user_qemu -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:07:49.390     18:28:35 vfio_user_qemu -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:07:49.390     18:28:35 vfio_user_qemu -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:07:49.390     18:28:35 vfio_user_qemu -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:07:49.390       18:28:35 vfio_user_qemu -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/vfio_user.sh
00:07:49.390      18:28:35 vfio_user_qemu -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user
00:07:49.390     18:28:35 vfio_user_qemu -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user
00:07:49.390     18:28:35 vfio_user_qemu -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:07:49.390     18:28:35 vfio_user_qemu -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:07:49.390     18:28:35 vfio_user_qemu -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:07:49.390     18:28:35 vfio_user_qemu -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:07:49.390     18:28:35 vfio_user_qemu -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config
00:07:49.390      18:28:35 vfio_user_qemu -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:07:49.390      18:28:35 vfio_user_qemu -- common/autotest.config@2 -- # vhost_0_main_core=0
00:07:49.390      18:28:35 vfio_user_qemu -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:07:49.390      18:28:35 vfio_user_qemu -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:07:49.390      18:28:35 vfio_user_qemu -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:07:49.390      18:28:35 vfio_user_qemu -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:07:49.390      18:28:35 vfio_user_qemu -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:07:49.390      18:28:35 vfio_user_qemu -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:07:49.390      18:28:35 vfio_user_qemu -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:07:49.390      18:28:35 vfio_user_qemu -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:07:49.390      18:28:35 vfio_user_qemu -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:07:49.390      18:28:35 vfio_user_qemu -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:07:49.390      18:28:35 vfio_user_qemu -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:07:49.390      18:28:35 vfio_user_qemu -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:07:49.390      18:28:35 vfio_user_qemu -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:07:49.390      18:28:35 vfio_user_qemu -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:07:49.390      18:28:35 vfio_user_qemu -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:07:49.390      18:28:35 vfio_user_qemu -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:07:49.391      18:28:35 vfio_user_qemu -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:07:49.391      18:28:35 vfio_user_qemu -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:07:49.391      18:28:35 vfio_user_qemu -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:07:49.391      18:28:35 vfio_user_qemu -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:07:49.391      18:28:35 vfio_user_qemu -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:07:49.391      18:28:35 vfio_user_qemu -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:07:49.391      18:28:35 vfio_user_qemu -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:07:49.391      18:28:35 vfio_user_qemu -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
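The autotest.config block above gives each test VM a two-core QEMU affinity mask and a NUMA node: VMs 0-5 sit on node 0, VMs 6-11 on node 1, with VM n pinned to cores 2n+1 through 2n+2 (core 0 is reserved for the vhost reactor). A hypothetical generator reproducing that pattern, assuming it holds for all twelve entries:

```shell
#!/usr/bin/env bash
# Hypothetical generator for the VM_*_qemu_mask / VM_*_qemu_numa_node
# pattern seen in autotest.config above: VM n -> cores (2n+1)-(2n+2),
# NUMA node n/6 (integer division).
layout=$(for n in $(seq 0 11); do
    printf 'VM_%d_qemu_mask=%d-%d VM_%d_qemu_numa_node=%d\n' \
        "$n" $((2*n + 1)) $((2*n + 2)) "$n" $((n / 6))
done)
echo "$layout"
```

This is only a sketch of the layout's arithmetic; the shipped config hard-codes each assignment so individual VMs can be re-pinned without touching the others.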
00:07:49.391     18:28:35 vfio_user_qemu -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh
00:07:49.391      18:28:35 vfio_user_qemu -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:07:49.391      18:28:35 vfio_user_qemu -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:07:49.391      18:28:35 vfio_user_qemu -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:07:49.391      18:28:35 vfio_user_qemu -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler
00:07:49.391      18:28:35 vfio_user_qemu -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:07:49.391      18:28:35 vfio_user_qemu -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh
00:07:49.391       18:28:35 vfio_user_qemu -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:07:49.391        18:28:35 vfio_user_qemu -- scheduler/cgroups.sh@244 -- # check_cgroup
00:07:49.391        18:28:35 vfio_user_qemu -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:07:49.391        18:28:35 vfio_user_qemu -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:07:49.391        18:28:35 vfio_user_qemu -- scheduler/cgroups.sh@10 -- # echo 2
00:07:49.391       18:28:35 vfio_user_qemu -- scheduler/cgroups.sh@244 -- # cgroup_version=2
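The `check_cgroup` steps above settle on `cgroup_version=2` because `/sys/fs/cgroup/cgroup.controllers` exists (unified hierarchy) and its contents include `cpuset`, which the scheduler tests need for CPU pinning. A sketch of that detection, parameterized so it can be pointed at any cgroup root rather than the live `/sys/fs/cgroup`:

```shell
#!/usr/bin/env bash
# Sketch of the scheduler/cgroups.sh check_cgroup logic traced above:
# a cgroup.controllers file marks the v2 unified hierarchy; report v2
# only when the cpuset controller is enabled there, else fall back to 1.
check_cgroup() {
    local sysfs_cgroup=${1:-/sys/fs/cgroup}
    if [[ -e $sysfs_cgroup/cgroup.controllers ]]; then
        [[ $(< "$sysfs_cgroup/cgroup.controllers") == *cpuset* ]] && { echo 2; return; }
    fi
    echo 1
}
cgroup_version=$(check_cgroup)
```

The v2-without-cpuset fallback to 1 is an assumption of this sketch; the real script's behavior in that corner isn't visible in the trace.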
00:07:49.391    18:28:35 vfio_user_qemu -- vfio_user/common.sh@11 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:07:49.391    18:28:35 vfio_user_qemu -- vfio_user/common.sh@14 -- # [[ ! -e /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 ]]
00:07:49.391    18:28:35 vfio_user_qemu -- vfio_user/common.sh@19 -- # QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:07:49.391   18:28:35 vfio_user_qemu -- vfio_user/vfio_user.sh@11 -- # echo 'Running SPDK vfio-user fio autotest...'
00:07:49.391  Running SPDK vfio-user fio autotest...
00:07:49.391   18:28:35 vfio_user_qemu -- vfio_user/vfio_user.sh@13 -- # vhosttestinit
00:07:49.391   18:28:35 vfio_user_qemu -- vhost/common.sh@37 -- # '[' iso == iso ']'
00:07:49.391   18:28:35 vfio_user_qemu -- vhost/common.sh@38 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh
00:07:50.770  0000:00:04.7 (8086 6f27): Already using the vfio-pci driver
00:07:50.770  0000:00:04.6 (8086 6f26): Already using the vfio-pci driver
00:07:50.770  0000:00:04.5 (8086 6f25): Already using the vfio-pci driver
00:07:50.770  0000:00:04.4 (8086 6f24): Already using the vfio-pci driver
00:07:50.770  0000:00:04.3 (8086 6f23): Already using the vfio-pci driver
00:07:50.770  0000:00:04.2 (8086 6f22): Already using the vfio-pci driver
00:07:50.770  0000:00:04.1 (8086 6f21): Already using the vfio-pci driver
00:07:50.770  0000:00:04.0 (8086 6f20): Already using the vfio-pci driver
00:07:50.770  0000:80:04.7 (8086 6f27): Already using the vfio-pci driver
00:07:50.770  0000:80:04.6 (8086 6f26): Already using the vfio-pci driver
00:07:50.770  0000:80:04.5 (8086 6f25): Already using the vfio-pci driver
00:07:50.770  0000:80:04.4 (8086 6f24): Already using the vfio-pci driver
00:07:50.770  0000:80:04.3 (8086 6f23): Already using the vfio-pci driver
00:07:50.770  0000:80:04.2 (8086 6f22): Already using the vfio-pci driver
00:07:50.770  0000:80:04.1 (8086 6f21): Already using the vfio-pci driver
00:07:50.770  0000:80:04.0 (8086 6f20): Already using the vfio-pci driver
00:07:50.770  0000:0d:00.0 (8086 0a54): Already using the vfio-pci driver
00:07:50.770   18:28:37 vfio_user_qemu -- vhost/common.sh@41 -- # [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz ]]
00:07:50.770   18:28:37 vfio_user_qemu -- vhost/common.sh@41 -- # [[ ! -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:07:50.770   18:28:37 vfio_user_qemu -- vhost/common.sh@46 -- # [[ ! -f /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
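The vhosttestinit checks above (vhost/common.sh@41-46) decide whether the VM test image needs unpacking: decompress `spdk_test_image.qcow2.gz` only if the `.qcow2` itself is missing, then verify an image is present before launching any guest. A sketch of that step under those assumptions:

```shell
#!/usr/bin/env bash
# Sketch of the vhosttestinit image-prep checks traced above: unpack the
# compressed test image only when the .qcow2 is absent, keeping the .gz
# (via gzip -dc) so the next run can repeat the check cheaply.
prepare_test_image() {
    local image=$1
    if [[ -e $image.gz && ! -e $image ]]; then
        gzip -dc "$image.gz" > "$image"
    fi
    [[ -f $image ]] || { echo "WARNING: no VM test image at $image" >&2; return 1; }
}

prepare_test_image /var/spdk/dependencies/vhost/spdk_test_image.qcow2 || true
```

In this run both files already existed, so the `[[ ! -e ... ]]` guard short-circuited and no decompression happened before `run_test vfio_user_nvme_fio` started.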
00:07:50.770   18:28:37 vfio_user_qemu -- vfio_user/vfio_user.sh@15 -- # run_test vfio_user_nvme_fio /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/vfio_user_fio.sh
00:07:50.770   18:28:37 vfio_user_qemu -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:50.770   18:28:37 vfio_user_qemu -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:50.770   18:28:37 vfio_user_qemu -- common/autotest_common.sh@10 -- # set +x
00:07:50.770  ************************************
00:07:50.770  START TEST vfio_user_nvme_fio
00:07:50.770  ************************************
00:07:50.770   18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/vfio_user_fio.sh
00:07:50.770  * Looking for test storage...
00:07:50.770  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme
00:07:50.770    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:07:50.770     18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:07:50.770     18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1693 -- # lcov --version
00:07:50.770    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:07:50.770    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:50.770    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:50.770    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:50.770    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@336 -- # IFS=.-:
00:07:50.770    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@336 -- # read -ra ver1
00:07:50.770    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@337 -- # IFS=.-:
00:07:50.770    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@337 -- # read -ra ver2
00:07:50.770    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@338 -- # local 'op=<'
00:07:50.770    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@340 -- # ver1_l=2
00:07:50.770    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@341 -- # ver2_l=1
00:07:50.770    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:50.770    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@344 -- # case "$op" in
00:07:50.770    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@345 -- # : 1
00:07:50.770    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:50.770    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:50.770     18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@365 -- # decimal 1
00:07:50.770     18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@353 -- # local d=1
00:07:50.770     18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:50.770     18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@355 -- # echo 1
00:07:50.770    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@365 -- # ver1[v]=1
00:07:50.770     18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@366 -- # decimal 2
00:07:50.770     18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@353 -- # local d=2
00:07:50.770     18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:50.770     18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@355 -- # echo 2
00:07:50.770    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@366 -- # ver2[v]=2
00:07:50.770    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:50.770    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:50.770    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@368 -- # return 0
00:07:50.770    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:50.770    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:07:50.770  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:50.770  		--rc genhtml_branch_coverage=1
00:07:50.770  		--rc genhtml_function_coverage=1
00:07:50.770  		--rc genhtml_legend=1
00:07:50.770  		--rc geninfo_all_blocks=1
00:07:50.770  		--rc geninfo_unexecuted_blocks=1
00:07:50.770  		
00:07:50.770  		'
00:07:50.770    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:07:50.770  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:50.770  		--rc genhtml_branch_coverage=1
00:07:50.770  		--rc genhtml_function_coverage=1
00:07:50.770  		--rc genhtml_legend=1
00:07:50.770  		--rc geninfo_all_blocks=1
00:07:50.770  		--rc geninfo_unexecuted_blocks=1
00:07:50.770  		
00:07:50.770  		'
00:07:50.770    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:07:50.770  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:50.770  		--rc genhtml_branch_coverage=1
00:07:50.770  		--rc genhtml_function_coverage=1
00:07:50.770  		--rc genhtml_legend=1
00:07:50.770  		--rc geninfo_all_blocks=1
00:07:50.770  		--rc geninfo_unexecuted_blocks=1
00:07:50.770  		
00:07:50.770  		'
00:07:50.770    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:07:50.770  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:50.770  		--rc genhtml_branch_coverage=1
00:07:50.770  		--rc genhtml_function_coverage=1
00:07:50.770  		--rc genhtml_legend=1
00:07:50.770  		--rc geninfo_all_blocks=1
00:07:50.770  		--rc geninfo_unexecuted_blocks=1
00:07:50.770  		
00:07:50.770  		'
00:07:50.770   18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/common.sh
00:07:50.770    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/common.sh@6 -- # : 128
00:07:50.770    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/common.sh@7 -- # : 512
00:07:50.770    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/common.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh
00:07:50.770     18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@6 -- # : false
00:07:50.770     18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@7 -- # : /root/vhost_test
00:07:50.770     18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@8 -- # : /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:07:50.770     18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@9 -- # : qemu-img
00:07:50.770      18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/..
00:07:50.770     18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest
00:07:50.770     18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:07:50.771     18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:07:50.771     18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:07:50.771     18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:07:50.771     18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:07:50.771       18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/vfio_user_fio.sh
00:07:50.771      18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme
00:07:50.771     18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme
00:07:50.771     18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:07:50.771     18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:07:50.771     18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:07:50.771     18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:07:50.771     18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config
00:07:50.771      18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:07:50.771      18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@2 -- # vhost_0_main_core=0
00:07:50.771      18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:07:50.771      18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:07:50.771      18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:07:50.771      18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:07:50.771      18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:07:50.771      18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:07:50.771      18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:07:50.771      18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:07:50.771      18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:07:50.771      18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:07:50.771      18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:07:50.771      18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:07:50.771      18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:07:50.771      18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:07:50.771      18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:07:50.771      18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:07:50.771      18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:07:50.771      18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:07:50.771      18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:07:50.771      18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:07:50.771      18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:07:50.771      18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:07:50.771      18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:07:50.771      18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:07:50.771     18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh
00:07:50.771      18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:07:50.771      18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:07:50.771      18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:07:50.771      18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler
00:07:50.771      18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:07:50.771      18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh
00:07:50.771       18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:07:50.771        18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/cgroups.sh@244 -- # check_cgroup
00:07:50.771        18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:07:50.771        18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:07:50.771        18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/cgroups.sh@10 -- # echo 2
00:07:50.771       18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/cgroups.sh@244 -- # cgroup_version=2
00:07:50.771    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/common.sh@11 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:07:50.771    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/common.sh@14 -- # [[ ! -e /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 ]]
00:07:50.771    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/common.sh@19 -- # QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:07:50.771   18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/common.sh
00:07:50.771   18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/autotest.config
00:07:50.771    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@1 -- # vhost_0_reactor_mask='[0-3]'
00:07:50.771    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@2 -- # vhost_0_main_core=0
00:07:50.771    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@4 -- # VM_0_qemu_mask=4-5
00:07:50.771    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:07:50.771    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@7 -- # VM_1_qemu_mask=6-7
00:07:50.771    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:07:50.771    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@10 -- # VM_2_qemu_mask=8-9
00:07:50.771    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:07:50.771    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@13 -- # get_vhost_dir 0
00:07:50.771    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@105 -- # local vhost_name=0
00:07:50.771    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:07:50.771    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:07:50.771   18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@13 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:07:50.771   18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@15 -- # fio_bin=--fio-bin=/usr/src/fio-static/fio
00:07:50.771   18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@16 -- # vm_no=2
00:07:50.771   18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@18 -- # trap clean_vfio_user EXIT
00:07:50.771   18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@19 -- # vhosttestinit
00:07:50.771   18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@37 -- # '[' '' == iso ']'
00:07:50.771   18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@41 -- # [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz ]]
00:07:50.771   18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@41 -- # [[ ! -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:07:50.771   18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@46 -- # [[ ! -f /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:07:50.771   18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@21 -- # timing_enter start_vfio_user
00:07:50.771   18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:50.771   18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:07:50.771   18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@22 -- # vfio_user_run 0
00:07:50.771   18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@11 -- # local vhost_name=0
00:07:50.771   18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@12 -- # local vfio_user_dir nvmf_pid_file rpc_py
00:07:50.771    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@14 -- # get_vhost_dir 0
00:07:50.771    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@105 -- # local vhost_name=0
00:07:50.771    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:07:50.771    18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:07:50.771   18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@14 -- # vfio_user_dir=/root/vhost_test/vhost/0
00:07:50.771   18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@15 -- # nvmf_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:07:50.771   18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@16 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:07:50.771   18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@18 -- # mkdir -p /root/vhost_test/vhost/0
00:07:50.771   18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@20 -- # timing_enter vfio_user_start
00:07:50.771   18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:50.771   18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:07:51.030   18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@22 -- # nvmfpid=396347
00:07:51.030   18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/nvmf_tgt -r /root/vhost_test/vhost/0/rpc.sock -m 0xf -s 512
00:07:51.030   18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@23 -- # echo 396347
00:07:51.030   18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@25 -- # echo 'Process pid: 396347'
00:07:51.030  Process pid: 396347
00:07:51.030   18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@26 -- # echo 'waiting for app to run...'
00:07:51.030  waiting for app to run...
00:07:51.030   18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@27 -- # waitforlisten 396347 /root/vhost_test/vhost/0/rpc.sock
00:07:51.030   18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@835 -- # '[' -z 396347 ']'
00:07:51.030   18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@839 -- # local rpc_addr=/root/vhost_test/vhost/0/rpc.sock
00:07:51.030   18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:51.030   18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...'
00:07:51.030  Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...
00:07:51.030   18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:51.030   18:28:37 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:07:51.030  [2024-11-17 18:28:37.429104] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:07:51.030  [2024-11-17 18:28:37.429263] [ DPDK EAL parameters: nvmf --no-shconf -c 0xf -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid396347 ]
00:07:51.030  EAL: No free 2048 kB hugepages reported on node 1
00:07:51.288  [2024-11-17 18:28:37.760987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:51.288  [2024-11-17 18:28:37.807339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:51.288  [2024-11-17 18:28:37.807420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:51.288  [2024-11-17 18:28:37.807424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:51.288  [2024-11-17 18:28:37.807473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:07:51.855   18:28:38 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:51.855   18:28:38 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@868 -- # return 0
00:07:51.855   18:28:38 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@29 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_create_transport -t VFIOUSER
00:07:52.114   18:28:38 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@30 -- # timing_exit vfio_user_start
00:07:52.114   18:28:38 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:52.114   18:28:38 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:07:52.114    18:28:38 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@27 -- # seq 0 2
00:07:52.114   18:28:38 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@27 -- # for i in $(seq 0 $vm_no)
00:07:52.114   18:28:38 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@28 -- # vm_muser_dir=/root/vhost_test/vms/0/muser
00:07:52.114   18:28:38 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@29 -- # rm -rf /root/vhost_test/vms/0/muser
00:07:52.114   18:28:38 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@30 -- # mkdir -p /root/vhost_test/vms/0/muser/domain/muser0/0
00:07:52.114   18:28:38 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@32 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_create_subsystem nqn.2019-07.io.spdk:cnode0 -s SPDK000 -a
00:07:52.373   18:28:38 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@33 -- # (( i == vm_no ))
00:07:52.373   18:28:38 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_malloc_create 128 512 -b Malloc0
00:07:52.373  Malloc0
00:07:52.373   18:28:38 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@38 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode0 Malloc0
00:07:52.632   18:28:39 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@40 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode0 -t VFIOUSER -a /root/vhost_test/vms/0/muser/domain/muser0/0 -s 0
00:07:52.890   18:28:39 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@27 -- # for i in $(seq 0 $vm_no)
00:07:52.890   18:28:39 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@28 -- # vm_muser_dir=/root/vhost_test/vms/1/muser
00:07:52.890   18:28:39 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@29 -- # rm -rf /root/vhost_test/vms/1/muser
00:07:52.890   18:28:39 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@30 -- # mkdir -p /root/vhost_test/vms/1/muser/domain/muser1/1
00:07:52.890   18:28:39 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@32 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -s SPDK001 -a
00:07:53.148   18:28:39 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@33 -- # (( i == vm_no ))
00:07:53.148   18:28:39 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_malloc_create 128 512 -b Malloc1
00:07:53.407  Malloc1
00:07:53.407   18:28:39 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@38 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
00:07:53.666   18:28:40 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@40 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /root/vhost_test/vms/1/muser/domain/muser1/1 -s 0
00:07:53.666   18:28:40 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@27 -- # for i in $(seq 0 $vm_no)
00:07:53.666   18:28:40 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@28 -- # vm_muser_dir=/root/vhost_test/vms/2/muser
00:07:53.666   18:28:40 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@29 -- # rm -rf /root/vhost_test/vms/2/muser
00:07:53.666   18:28:40 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@30 -- # mkdir -p /root/vhost_test/vms/2/muser/domain/muser2/2
00:07:53.666   18:28:40 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@32 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -s SPDK002 -a
00:07:53.925   18:28:40 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@33 -- # (( i == vm_no ))
00:07:53.925   18:28:40 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/gen_nvme.sh
00:07:53.925   18:28:40 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock load_subsystem_config
00:07:57.212   18:28:43 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@35 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Nvme0n1
00:07:57.212   18:28:43 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@40 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /root/vhost_test/vms/2/muser/domain/muser2/2 -s 0
00:07:57.471   18:28:43 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@43 -- # timing_exit start_vfio_user
00:07:57.471   18:28:43 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:57.472   18:28:43 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:07:57.472   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@45 -- # used_vms=
00:07:57.472   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@46 -- # timing_enter launch_vms
00:07:57.472   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:57.472   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:07:57.472    18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@47 -- # seq 0 2
00:07:57.472   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@47 -- # for i in $(seq 0 $vm_no)
00:07:57.472   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@48 -- # vm_setup --disk-type=vfio_user --force=0 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --memory=768 --disks=0
00:07:57.472   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@518 -- # xtrace_disable
00:07:57.472   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:07:57.472  WARN: removing existing VM in '/root/vhost_test/vms/0'
00:07:57.472  INFO: Creating new VM in /root/vhost_test/vms/0
00:07:57.472  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:07:57.472  INFO: TASK MASK: 4-5
00:07:57.472   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@671 -- # local node_num=0
00:07:57.472   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@672 -- # local boot_disk_present=false
00:07:57.472   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:07:57.472   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:07:57.472   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:07:57.472   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:07:57.472   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:07:57.472   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:57.472   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:07:57.472   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:07:57.472  INFO: NUMA NODE: 0
00:07:57.472   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:07:57.472   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:07:57.472   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:07:57.472   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:07:57.472   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@677 -- # [[ -n '' ]]
00:07:57.472   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:07:57.472   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:07:57.472   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:07:57.472   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:07:57.472   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:07:57.472   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:07:57.472   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:07:57.734   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:07:57.734   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@686 -- # [[ -z '' ]]
00:07:57.734   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:07:57.734   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:07:57.734   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:07:57.734   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:07:57.734   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:07:57.734   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@701 -- # IFS=,
00:07:57.734   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@701 -- # read -r disk disk_type _
00:07:57.734   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@702 -- # [[ -z '' ]]
00:07:57.734   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@702 -- # disk_type=vfio_user
00:07:57.734   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@704 -- # case $disk_type in
00:07:57.734   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@758 -- # notice 'using socket /root/vhost_test/vms/0/domain/muser0/0/cntrl'
00:07:57.734   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/0/domain/muser0/0/cntrl'
00:07:57.734   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:07:57.734   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:07:57.734   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:07:57.734   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:57.734   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:07:57.734   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/0/domain/muser0/0/cntrl'
00:07:57.734  INFO: using socket /root/vhost_test/vms/0/domain/muser0/0/cntrl
00:07:57.734   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@759 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/$vm_num/muser/domain/muser$disk/$disk/cntrl")
00:07:57.734   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@760 -- # [[ 0 == '' ]]
00:07:57.734   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@780 -- # [[ -n '' ]]
00:07:57.734   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@785 -- # (( 0 ))
00:07:57.734   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/0/run.sh'
00:07:57.734   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/0/run.sh'
00:07:57.734   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:07:57.734   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:07:57.734   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:07:57.734   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:57.734   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:07:57.734   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/0/run.sh'
00:07:57.734  INFO: Saving to /root/vhost_test/vms/0/run.sh
00:07:57.734   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@787 -- # cat
00:07:57.734    18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 4-5 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 768 --enable-kvm -cpu host -smp 2 -vga std -vnc :100 -daemonize -object memory-backend-file,id=mem,size=768M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10002,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/0/qemu.pid -serial file:/root/vhost_test/vms/0/serial.log -D /root/vhost_test/vms/0/qemu.log -chardev file,path=/root/vhost_test/vms/0/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10000-:22,hostfwd=tcp::10001-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/0/muser/domain/muser0/0/cntrl
00:07:57.734   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/0/run.sh
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@827 -- # echo 10000
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@828 -- # echo 10001
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@829 -- # echo 10002
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/0/migration_port
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@832 -- # [[ -z '' ]]
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@834 -- # echo 10004
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@835 -- # echo 100
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@837 -- # [[ -z '' ]]
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@838 -- # [[ -z '' ]]
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@49 -- # used_vms+=' 0'
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@47 -- # for i in $(seq 0 $vm_no)
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@48 -- # vm_setup --disk-type=vfio_user --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --memory=768 --disks=1
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@518 -- # xtrace_disable
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:07:57.735  WARN: removing existing VM in '/root/vhost_test/vms/1'
00:07:57.735  INFO: Creating new VM in /root/vhost_test/vms/1
00:07:57.735  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:07:57.735  INFO: TASK MASK: 6-7
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@671 -- # local node_num=0
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@672 -- # local boot_disk_present=false
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:07:57.735  INFO: NUMA NODE: 0
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@677 -- # [[ -n '' ]]
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@686 -- # [[ -z '' ]]
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@701 -- # IFS=,
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@701 -- # read -r disk disk_type _
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@702 -- # [[ -z '' ]]
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@702 -- # disk_type=vfio_user
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@704 -- # case $disk_type in
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@758 -- # notice 'using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:07:57.735  INFO: using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@759 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/$vm_num/muser/domain/muser$disk/$disk/cntrl")
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@760 -- # [[ 1 == '' ]]
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@780 -- # [[ -n '' ]]
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@785 -- # (( 0 ))
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh'
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh'
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh'
00:07:57.735  INFO: Saving to /root/vhost_test/vms/1/run.sh
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@787 -- # cat
00:07:57.735    18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 768 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=768M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/1/muser/domain/muser1/1/cntrl
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/1/run.sh
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@827 -- # echo 10100
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@828 -- # echo 10101
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@829 -- # echo 10102
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/1/migration_port
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@832 -- # [[ -z '' ]]
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@834 -- # echo 10104
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@835 -- # echo 101
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@837 -- # [[ -z '' ]]
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@838 -- # [[ -z '' ]]
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@49 -- # used_vms+=' 1'
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@47 -- # for i in $(seq 0 $vm_no)
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@48 -- # vm_setup --disk-type=vfio_user --force=2 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --memory=768 --disks=2
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@518 -- # xtrace_disable
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:07:57.735  WARN: removing existing VM in '/root/vhost_test/vms/2'
00:07:57.735  INFO: Creating new VM in /root/vhost_test/vms/2
00:07:57.735  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:07:57.735  INFO: TASK MASK: 8-9
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@671 -- # local node_num=0
00:07:57.735   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@672 -- # local boot_disk_present=false
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:07:57.736  INFO: NUMA NODE: 0
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@677 -- # [[ -n '' ]]
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@686 -- # [[ -z '' ]]
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@701 -- # IFS=,
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@701 -- # read -r disk disk_type _
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@702 -- # [[ -z '' ]]
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@702 -- # disk_type=vfio_user
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@704 -- # case $disk_type in
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@758 -- # notice 'using socket /root/vhost_test/vms/2/domain/muser2/2/cntrl'
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/2/domain/muser2/2/cntrl'
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/2/domain/muser2/2/cntrl'
00:07:57.736  INFO: using socket /root/vhost_test/vms/2/domain/muser2/2/cntrl
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@759 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/$vm_num/muser/domain/muser$disk/$disk/cntrl")
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@760 -- # [[ 2 == '' ]]
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@780 -- # [[ -n '' ]]
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@785 -- # (( 0 ))
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/2/run.sh'
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/2/run.sh'
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/2/run.sh'
00:07:57.736  INFO: Saving to /root/vhost_test/vms/2/run.sh
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@787 -- # cat
00:07:57.736    18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 8-9 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 768 --enable-kvm -cpu host -smp 2 -vga std -vnc :102 -daemonize -object memory-backend-file,id=mem,size=768M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10202,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/2/qemu.pid -serial file:/root/vhost_test/vms/2/serial.log -D /root/vhost_test/vms/2/qemu.log -chardev file,path=/root/vhost_test/vms/2/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10200-:22,hostfwd=tcp::10201-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/2/muser/domain/muser2/2/cntrl
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/2/run.sh
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@827 -- # echo 10200
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@828 -- # echo 10201
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@829 -- # echo 10202
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/2/migration_port
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@832 -- # [[ -z '' ]]
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@834 -- # echo 10204
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@835 -- # echo 102
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@837 -- # [[ -z '' ]]
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@838 -- # [[ -z '' ]]
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@49 -- # used_vms+=' 2'
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@52 -- # vm_run 0 1 2
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@843 -- # local run_all=false
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@844 -- # local vms_to_run=
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@846 -- # getopts a-: optchar
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@856 -- # false
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@859 -- # shift 0
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@860 -- # for vm in "$@"
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@861 -- # vm_num_is_valid 0
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/0/run.sh ]]
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@866 -- # vms_to_run+=' 0'
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@860 -- # for vm in "$@"
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@861 -- # vm_num_is_valid 0
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]]
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@866 -- # vms_to_run+=' 1'
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@860 -- # for vm in "$@"
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@861 -- # vm_num_is_valid 0
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/2/run.sh ]]
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@866 -- # vms_to_run+=' 2'
00:07:57.736   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:07:57.737   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@871 -- # vm_is_running 0
00:07:57.737   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 0
00:07:57.737   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:07:57.737   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:07:57.737   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/0
00:07:57.737   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:07:57.737   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@373 -- # return 1
00:07:57.737   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/0/run.sh'
00:07:57.737   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/0/run.sh'
00:07:57.737   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:07:57.737   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:07:57.737   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:07:57.737   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:57.737   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:07:57.737   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/0/run.sh'
00:07:57.737  INFO: running /root/vhost_test/vms/0/run.sh
00:07:57.737   18:28:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@877 -- # /root/vhost_test/vms/0/run.sh
00:07:57.737  Running VM in /root/vhost_test/vms/0
00:07:57.996  Waiting for QEMU pid file
00:07:58.256  [2024-11-17 18:28:44.722587] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/0/muser/domain/muser0/0: enabling controller
00:07:59.227  === qemu.log ===
00:07:59.227  === qemu.log ===
00:07:59.227   18:28:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:07:59.227   18:28:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@871 -- # vm_is_running 1
00:07:59.227   18:28:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:07:59.227   18:28:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:59.227   18:28:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:07:59.227   18:28:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:07:59.227   18:28:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:07:59.227   18:28:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@373 -- # return 1
00:07:59.227   18:28:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/1/run.sh'
00:07:59.227   18:28:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh'
00:07:59.227   18:28:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:07:59.227   18:28:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:07:59.227   18:28:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:07:59.227   18:28:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:59.227   18:28:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:07:59.227   18:28:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh'
00:07:59.227  INFO: running /root/vhost_test/vms/1/run.sh
00:07:59.227   18:28:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@877 -- # /root/vhost_test/vms/1/run.sh
00:07:59.227  Running VM in /root/vhost_test/vms/1
00:07:59.551  Waiting for QEMU pid file
00:07:59.862  [2024-11-17 18:28:46.144934] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: enabling controller
00:08:00.537  === qemu.log ===
00:08:00.537  === qemu.log ===
00:08:00.537   18:28:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:08:00.537   18:28:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@871 -- # vm_is_running 2
00:08:00.537   18:28:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 2
00:08:00.537   18:28:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:00.537   18:28:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:00.537   18:28:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/2
00:08:00.537   18:28:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/2/qemu.pid ]]
00:08:00.537   18:28:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@373 -- # return 1
00:08:00.537   18:28:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/2/run.sh'
00:08:00.537   18:28:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/2/run.sh'
00:08:00.537   18:28:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:00.537   18:28:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:00.537   18:28:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:00.537   18:28:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:00.537   18:28:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:00.537   18:28:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/2/run.sh'
00:08:00.537  INFO: running /root/vhost_test/vms/2/run.sh
00:08:00.537   18:28:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@877 -- # /root/vhost_test/vms/2/run.sh
00:08:00.537  Running VM in /root/vhost_test/vms/2
00:08:00.862  Waiting for QEMU pid file
00:08:01.139  [2024-11-17 18:28:47.463164] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/2/muser/domain/muser2/2: enabling controller
00:08:01.707  === qemu.log ===
00:08:01.707  === qemu.log ===
00:08:01.707   18:28:48 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@53 -- # vm_wait_for_boot 60 0 1 2
00:08:01.707   18:28:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@913 -- # assert_number 60
00:08:01.707   18:28:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@281 -- # [[ 60 =~ [0-9]+ ]]
00:08:01.707   18:28:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@281 -- # return 0
00:08:01.707   18:28:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@915 -- # xtrace_disable
00:08:01.707   18:28:48 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:08:01.707  INFO: Waiting for VMs to boot
00:08:01.707  INFO: waiting for VM0 (/root/vhost_test/vms/0)
00:08:16.594  [2024-11-17 18:29:01.496884] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller
00:08:16.594  [2024-11-17 18:29:01.505920] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller
00:08:16.594  [2024-11-17 18:29:01.509957] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: enabling controller
00:08:16.594  [2024-11-17 18:29:02.050337] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/0/muser/domain/muser0/0: disabling controller
00:08:16.594  [2024-11-17 18:29:02.065430] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/0/muser/domain/muser0/0: disabling controller
00:08:16.594  [2024-11-17 18:29:02.069456] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/0/muser/domain/muser0/0: enabling controller
00:08:17.531  [2024-11-17 18:29:03.784062] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/2/muser/domain/muser2/2: disabling controller
00:08:17.531  [2024-11-17 18:29:03.793091] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/2/muser/domain/muser2/2: disabling controller
00:08:17.531  [2024-11-17 18:29:03.797119] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/2/muser/domain/muser2/2: enabling controller
00:08:24.106  
00:08:24.106  INFO: VM0 ready
00:08:24.106  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:08:24.106  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:08:24.675  INFO: waiting for VM1 (/root/vhost_test/vms/1)
00:08:25.612  
00:08:25.612  INFO: VM1 ready
00:08:25.612  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:08:25.612  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:08:26.549  INFO: waiting for VM2 (/root/vhost_test/vms/2)
00:08:27.117  
00:08:27.117  INFO: VM2 ready
00:08:27.377  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:08:27.377  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:08:28.755  INFO: all VMs ready
00:08:28.755   18:29:14 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@973 -- # return 0
00:08:28.755   18:29:14 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@55 -- # timing_exit launch_vms
00:08:28.755   18:29:14 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:28.755   18:29:14 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:08:28.755   18:29:14 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@57 -- # timing_enter run_vm_cmd
00:08:28.755   18:29:14 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:28.755   18:29:14 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:08:28.755   18:29:14 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@59 -- # fio_disks=
00:08:28.755   18:29:14 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@61 -- # for vm_num in $used_vms
00:08:28.755   18:29:14 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@62 -- # qemu_mask_param=VM_0_qemu_mask
00:08:28.755   18:29:14 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@64 -- # host_name=VM-0-4-5
00:08:28.755   18:29:14 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@65 -- # vm_exec 0 'hostname VM-0-4-5'
00:08:28.755   18:29:14 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:08:28.755   18:29:14 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:28.755   18:29:14 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:28.755   18:29:14 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=0
00:08:28.755   18:29:14 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:28.755    18:29:14 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:08:28.755    18:29:14 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:08:28.755    18:29:14 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:28.755    18:29:14 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:28.755    18:29:14 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:08:28.755    18:29:14 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:08:28.755   18:29:14 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'hostname VM-0-4-5'
00:08:28.755  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:08:28.755   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@66 -- # vm_start_fio_server --fio-bin=/usr/src/fio-static/fio 0
00:08:28.755   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@977 -- # local OPTIND optchar
00:08:28.755   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@978 -- # local readonly=
00:08:28.755   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@979 -- # local fio_bin=
00:08:28.755   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@980 -- # getopts :-: optchar
00:08:28.755   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@981 -- # case "$optchar" in
00:08:28.755   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@983 -- # case "$OPTARG" in
00:08:28.755   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@984 -- # local fio_bin=/usr/src/fio-static/fio
00:08:28.755   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@980 -- # getopts :-: optchar
00:08:28.755   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@993 -- # shift 1
00:08:28.755   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@994 -- # for vm_num in "$@"
00:08:28.755   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@995 -- # notice 'Starting fio server on VM0'
00:08:28.755   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Starting fio server on VM0'
00:08:28.755   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:28.755   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:28.755   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:28.755   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:28.755   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:28.755   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Starting fio server on VM0'
00:08:28.755  INFO: Starting fio server on VM0
00:08:28.755   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@996 -- # [[ /usr/src/fio-static/fio != '' ]]
00:08:28.755   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@997 -- # vm_exec 0 'cat > /root/fio; chmod +x /root/fio'
00:08:28.755   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:08:28.755   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:28.755   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:28.755   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=0
00:08:28.755   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:28.755    18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:08:28.755    18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:08:28.755    18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:28.755    18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:28.755    18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:08:28.755    18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:08:28.755   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'cat > /root/fio; chmod +x /root/fio'
00:08:28.755  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:08:29.014   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@998 -- # vm_exec 0 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:08:29.014   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:08:29.014   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:29.014   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:29.014   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=0
00:08:29.014   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:29.014    18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:08:29.014    18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:08:29.014    18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:29.014    18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:29.014    18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:08:29.014    18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:08:29.014   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:08:29.015  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:08:29.273   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@67 -- # vm_check_nvme_location 0
00:08:29.273    18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # vm_exec 0 'grep -l SPDK /sys/class/nvme/*/model'
00:08:29.273    18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:08:29.273    18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:29.273    18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:29.273    18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # awk -F/ '{print $5"n1"}'
00:08:29.273    18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=0
00:08:29.273    18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:29.273     18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:08:29.273     18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:08:29.273     18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:29.273     18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:29.273     18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:08:29.273     18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:08:29.273    18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l SPDK /sys/class/nvme/*/model'
00:08:29.273  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:08:29.532   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # SCSI_DISK=nvme0n1
00:08:29.532   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1046 -- # [[ -z nvme0n1 ]]
00:08:29.532    18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@69 -- # printf :/dev/%s nvme0n1
00:08:29.532   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@69 -- # fio_disks+=' --vm=0:/dev/nvme0n1'
00:08:29.532   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@61 -- # for vm_num in $used_vms
00:08:29.532   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@62 -- # qemu_mask_param=VM_1_qemu_mask
00:08:29.532   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@64 -- # host_name=VM-1-6-7
00:08:29.532   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@65 -- # vm_exec 1 'hostname VM-1-6-7'
00:08:29.532   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:08:29.532   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:29.532   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:29.533   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=1
00:08:29.533   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:29.533    18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:08:29.533    18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:08:29.533    18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:29.533    18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:29.533    18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:08:29.533    18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:08:29.533   18:29:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'hostname VM-1-6-7'
00:08:29.533  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:08:29.792   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@66 -- # vm_start_fio_server --fio-bin=/usr/src/fio-static/fio 1
00:08:29.792   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@977 -- # local OPTIND optchar
00:08:29.792   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@978 -- # local readonly=
00:08:29.792   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@979 -- # local fio_bin=
00:08:29.792   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@980 -- # getopts :-: optchar
00:08:29.792   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@981 -- # case "$optchar" in
00:08:29.792   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@983 -- # case "$OPTARG" in
00:08:29.792   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@984 -- # local fio_bin=/usr/src/fio-static/fio
00:08:29.792   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@980 -- # getopts :-: optchar
00:08:29.792   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@993 -- # shift 1
00:08:29.792   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@994 -- # for vm_num in "$@"
00:08:29.792   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@995 -- # notice 'Starting fio server on VM1'
00:08:29.792   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Starting fio server on VM1'
00:08:29.792   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:29.792   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:29.792   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:29.792   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:29.792   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:29.792   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Starting fio server on VM1'
00:08:29.792  INFO: Starting fio server on VM1
00:08:29.792   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@996 -- # [[ /usr/src/fio-static/fio != '' ]]
00:08:29.792   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@997 -- # vm_exec 1 'cat > /root/fio; chmod +x /root/fio'
00:08:29.792   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:08:29.792   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:29.792   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:29.792   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=1
00:08:29.792   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:29.792    18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:08:29.792    18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:08:29.792    18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:29.792    18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:29.792    18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:08:29.792    18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:08:29.792   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/fio; chmod +x /root/fio'
00:08:29.792  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:08:30.051   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@998 -- # vm_exec 1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:08:30.051   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:08:30.051   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:30.051   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:30.051   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=1
00:08:30.051   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:30.051    18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:08:30.051    18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:08:30.051    18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:30.051    18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:30.051    18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:08:30.051    18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:08:30.051   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:08:30.051  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:08:30.310   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@67 -- # vm_check_nvme_location 1
00:08:30.310    18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # awk -F/ '{print $5"n1"}'
00:08:30.310    18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # vm_exec 1 'grep -l SPDK /sys/class/nvme/*/model'
00:08:30.310    18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:08:30.310    18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:30.310    18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:30.310    18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=1
00:08:30.310    18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:30.310     18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:08:30.310     18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:08:30.310     18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:30.310     18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:30.310     18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:08:30.310     18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:08:30.310    18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'grep -l SPDK /sys/class/nvme/*/model'
00:08:30.310  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:08:30.569   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # SCSI_DISK=nvme0n1
00:08:30.569   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1046 -- # [[ -z nvme0n1 ]]
00:08:30.569    18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@69 -- # printf :/dev/%s nvme0n1
00:08:30.569   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@69 -- # fio_disks+=' --vm=1:/dev/nvme0n1'
00:08:30.569   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@61 -- # for vm_num in $used_vms
00:08:30.569   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@62 -- # qemu_mask_param=VM_2_qemu_mask
00:08:30.569   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@64 -- # host_name=VM-2-8-9
00:08:30.569   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@65 -- # vm_exec 2 'hostname VM-2-8-9'
00:08:30.569   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 2
00:08:30.569   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:30.569   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:30.569   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=2
00:08:30.569   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:30.569    18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 2
00:08:30.569    18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 2
00:08:30.569    18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:30.569    18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:30.569    18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/2
00:08:30.569    18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/2/ssh_socket
00:08:30.569   18:29:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10200 127.0.0.1 'hostname VM-2-8-9'
00:08:30.570  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:08:30.828   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@66 -- # vm_start_fio_server --fio-bin=/usr/src/fio-static/fio 2
00:08:30.828   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@977 -- # local OPTIND optchar
00:08:30.828   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@978 -- # local readonly=
00:08:30.828   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@979 -- # local fio_bin=
00:08:30.828   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@980 -- # getopts :-: optchar
00:08:30.828   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@981 -- # case "$optchar" in
00:08:30.828   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@983 -- # case "$OPTARG" in
00:08:30.828   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@984 -- # local fio_bin=/usr/src/fio-static/fio
00:08:30.828   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@980 -- # getopts :-: optchar
00:08:30.829   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@993 -- # shift 1
00:08:30.829   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@994 -- # for vm_num in "$@"
00:08:30.829   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@995 -- # notice 'Starting fio server on VM2'
00:08:30.829   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Starting fio server on VM2'
00:08:30.829   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:30.829   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:30.829   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:30.829   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:30.829   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:30.829   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Starting fio server on VM2'
00:08:30.829  INFO: Starting fio server on VM2
00:08:30.829   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@996 -- # [[ /usr/src/fio-static/fio != '' ]]
00:08:30.829   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@997 -- # vm_exec 2 'cat > /root/fio; chmod +x /root/fio'
00:08:30.829   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 2
00:08:30.829   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:30.829   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:30.829   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=2
00:08:30.829   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:30.829    18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 2
00:08:30.829    18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 2
00:08:30.829    18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:30.829    18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:30.829    18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/2
00:08:30.829    18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/2/ssh_socket
00:08:30.829   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10200 127.0.0.1 'cat > /root/fio; chmod +x /root/fio'
00:08:30.829  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:08:31.087   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@998 -- # vm_exec 2 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:08:31.087   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 2
00:08:31.087   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:31.087   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:31.087   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=2
00:08:31.087   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:31.087    18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 2
00:08:31.087    18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 2
00:08:31.087    18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:31.087    18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:31.088    18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/2
00:08:31.088    18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/2/ssh_socket
00:08:31.088   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10200 127.0.0.1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:08:31.088  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@67 -- # vm_check_nvme_location 2
00:08:31.347    18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # awk -F/ '{print $5"n1"}'
00:08:31.347    18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # vm_exec 2 'grep -l SPDK /sys/class/nvme/*/model'
00:08:31.347    18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 2
00:08:31.347    18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:31.347    18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:31.347    18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=2
00:08:31.347    18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:31.347     18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 2
00:08:31.347     18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 2
00:08:31.347     18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:31.347     18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:31.347     18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/2
00:08:31.347     18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/2/ssh_socket
00:08:31.347    18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10200 127.0.0.1 'grep -l SPDK /sys/class/nvme/*/model'
00:08:31.347  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # SCSI_DISK=nvme0n1
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1046 -- # [[ -z nvme0n1 ]]
00:08:31.347    18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@69 -- # printf :/dev/%s nvme0n1
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@69 -- # fio_disks+=' --vm=2:/dev/nvme0n1'
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@72 -- # job_file=default_integrity.job
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@73 -- # run_fio --fio-bin=/usr/src/fio-static/fio --job-file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job --out=/root/vhost_test/fio_results --vm=0:/dev/nvme0n1 --vm=1:/dev/nvme0n1 --vm=2:/dev/nvme0n1
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1053 -- # local arg
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1054 -- # local job_file=
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1055 -- # local fio_bin=
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1056 -- # vms=()
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1056 -- # local vms
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1057 -- # local out=
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1058 -- # local vm
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1059 -- # local run_server_mode=true
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1060 -- # local run_plugin_mode=false
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1061 -- # local fio_start_cmd
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1062 -- # local fio_output_format=normal
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1063 -- # local fio_gtod_reduce=false
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1064 -- # local wait_for_fio=true
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1069 -- # local fio_bin=/usr/src/fio-static/fio
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1068 -- # local job_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1072 -- # local out=/root/vhost_test/fio_results
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1073 -- # mkdir -p /root/vhost_test/fio_results
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1070 -- # vms+=("${arg#*=}")
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1070 -- # vms+=("${arg#*=}")
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1070 -- # vms+=("${arg#*=}")
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1092 -- # [[ -n /usr/src/fio-static/fio ]]
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1092 -- # [[ ! -r /usr/src/fio-static/fio ]]
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1097 -- # [[ -z /usr/src/fio-static/fio ]]
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1101 -- # [[ ! -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job ]]
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1106 -- # fio_start_cmd='/usr/src/fio-static/fio --eta=never '
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1108 -- # local job_fname
00:08:31.347    18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1109 -- # basename /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1109 -- # job_fname=default_integrity.job
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1110 -- # log_fname=default_integrity.log
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1111 -- # fio_start_cmd+=' --output=/root/vhost_test/fio_results/default_integrity.log --output-format=normal '
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1114 -- # for vm in "${vms[@]}"
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1115 -- # local vm_num=0
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1116 -- # local vmdisks=/dev/nvme0n1
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1118 -- # sed 's@filename=@filename=/dev/nvme0n1@;s@description=\(.*\)@description=\1 (VM=0)@' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1119 -- # vm_exec 0 'cat > /root/default_integrity.job'
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=0
00:08:31.347   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:31.347    18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:08:31.348    18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:08:31.348    18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:31.348    18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:31.348    18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:08:31.348    18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:08:31.348   18:29:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'cat > /root/default_integrity.job'
00:08:31.606  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:08:31.606   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1121 -- # false
00:08:31.606   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1125 -- # vm_exec 0 cat /root/default_integrity.job
00:08:31.606   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:08:31.606   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:31.606   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:31.606   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=0
00:08:31.606   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:31.606    18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:08:31.606    18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:08:31.606    18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:31.606    18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:31.606    18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:08:31.606    18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:08:31.607   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 cat /root/default_integrity.job
00:08:31.607  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:08:31.865  [global]
00:08:31.865  blocksize_range=4k-512k
00:08:31.865  iodepth=512
00:08:31.865  iodepth_batch=128
00:08:31.865  iodepth_low=256
00:08:31.865  ioengine=libaio
00:08:31.865  size=1G
00:08:31.865  io_size=4G
00:08:31.865  filename=/dev/nvme0n1
00:08:31.865  group_reporting
00:08:31.865  thread
00:08:31.865  numjobs=1
00:08:31.865  direct=1
00:08:31.865  rw=randwrite
00:08:31.865  do_verify=1
00:08:31.865  verify=md5
00:08:31.865  verify_backlog=1024
00:08:31.866  fsync_on_close=1
00:08:31.866  verify_state_save=0
00:08:31.866  [nvme-host]
00:08:31.866   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1127 -- # true
00:08:31.866    18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1128 -- # vm_fio_socket 0
00:08:31.866    18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@326 -- # vm_num_is_valid 0
00:08:31.866    18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:31.866    18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:31.866    18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@327 -- # local vm_dir=/root/vhost_test/vms/0
00:08:31.866    18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@329 -- # cat /root/vhost_test/vms/0/fio_socket
00:08:31.866   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1128 -- # fio_start_cmd+='--client=127.0.0.1,10001 --remote-config /root/default_integrity.job '
00:08:31.866   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1131 -- # true
00:08:31.866   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1114 -- # for vm in "${vms[@]}"
00:08:31.866   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1115 -- # local vm_num=1
00:08:31.866   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1116 -- # local vmdisks=/dev/nvme0n1
00:08:31.866   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1118 -- # sed 's@filename=@filename=/dev/nvme0n1@;s@description=\(.*\)@description=\1 (VM=1)@' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:08:31.866   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1119 -- # vm_exec 1 'cat > /root/default_integrity.job'
00:08:31.866   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:08:31.866   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:31.866   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:31.866   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=1
00:08:31.866   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:31.866    18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:08:31.866    18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:08:31.866    18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:31.866    18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:31.866    18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:08:31.866    18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:08:31.866   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/default_integrity.job'
00:08:31.866  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:08:32.125   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1121 -- # false
00:08:32.125   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1125 -- # vm_exec 1 cat /root/default_integrity.job
00:08:32.125   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:08:32.125   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:32.125   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:32.125   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=1
00:08:32.125   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:32.125    18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:08:32.125    18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:08:32.125    18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:32.125    18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:32.125    18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:08:32.125    18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:08:32.125   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 cat /root/default_integrity.job
00:08:32.125  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:08:32.383  [global]
00:08:32.383  blocksize_range=4k-512k
00:08:32.383  iodepth=512
00:08:32.383  iodepth_batch=128
00:08:32.383  iodepth_low=256
00:08:32.383  ioengine=libaio
00:08:32.383  size=1G
00:08:32.383  io_size=4G
00:08:32.383  filename=/dev/nvme0n1
00:08:32.383  group_reporting
00:08:32.383  thread
00:08:32.383  numjobs=1
00:08:32.383  direct=1
00:08:32.383  rw=randwrite
00:08:32.383  do_verify=1
00:08:32.383  verify=md5
00:08:32.383  verify_backlog=1024
00:08:32.383  fsync_on_close=1
00:08:32.383  verify_state_save=0
00:08:32.383  [nvme-host]
00:08:32.383   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1127 -- # true
00:08:32.383    18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1128 -- # vm_fio_socket 1
00:08:32.383    18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@326 -- # vm_num_is_valid 1
00:08:32.383    18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:32.383    18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:32.383    18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@327 -- # local vm_dir=/root/vhost_test/vms/1
00:08:32.383    18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@329 -- # cat /root/vhost_test/vms/1/fio_socket
00:08:32.383   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1128 -- # fio_start_cmd+='--client=127.0.0.1,10101 --remote-config /root/default_integrity.job '
00:08:32.383   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1131 -- # true
00:08:32.383   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1114 -- # for vm in "${vms[@]}"
00:08:32.383   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1115 -- # local vm_num=2
00:08:32.383   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1116 -- # local vmdisks=/dev/nvme0n1
00:08:32.383   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1118 -- # sed 's@filename=@filename=/dev/nvme0n1@;s@description=\(.*\)@description=\1 (VM=2)@' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:08:32.383   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1119 -- # vm_exec 2 'cat > /root/default_integrity.job'
00:08:32.383   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 2
00:08:32.383   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:32.383   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:32.383   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=2
00:08:32.383   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:32.384    18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 2
00:08:32.384    18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 2
00:08:32.384    18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:32.384    18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:32.384    18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/2
00:08:32.384    18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/2/ssh_socket
00:08:32.384   18:29:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10200 127.0.0.1 'cat > /root/default_integrity.job'
00:08:32.384  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:08:32.643   18:29:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1121 -- # false
00:08:32.643   18:29:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1125 -- # vm_exec 2 cat /root/default_integrity.job
00:08:32.643   18:29:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 2
00:08:32.643   18:29:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:32.643   18:29:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:32.643   18:29:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=2
00:08:32.643   18:29:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:32.643    18:29:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 2
00:08:32.643    18:29:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 2
00:08:32.643    18:29:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:32.643    18:29:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:32.643    18:29:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/2
00:08:32.643    18:29:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/2/ssh_socket
00:08:32.643   18:29:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10200 127.0.0.1 cat /root/default_integrity.job
00:08:32.643  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:08:32.901  [global]
00:08:32.901  blocksize_range=4k-512k
00:08:32.901  iodepth=512
00:08:32.901  iodepth_batch=128
00:08:32.901  iodepth_low=256
00:08:32.901  ioengine=libaio
00:08:32.901  size=1G
00:08:32.901  io_size=4G
00:08:32.901  filename=/dev/nvme0n1
00:08:32.901  group_reporting
00:08:32.901  thread
00:08:32.901  numjobs=1
00:08:32.901  direct=1
00:08:32.901  rw=randwrite
00:08:32.901  do_verify=1
00:08:32.901  verify=md5
00:08:32.901  verify_backlog=1024
00:08:32.901  fsync_on_close=1
00:08:32.901  verify_state_save=0
00:08:32.901  [nvme-host]
00:08:32.901   18:29:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1127 -- # true
00:08:32.901    18:29:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1128 -- # vm_fio_socket 2
00:08:32.901    18:29:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@326 -- # vm_num_is_valid 2
00:08:32.901    18:29:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:32.901    18:29:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:32.901    18:29:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@327 -- # local vm_dir=/root/vhost_test/vms/2
00:08:32.901    18:29:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@329 -- # cat /root/vhost_test/vms/2/fio_socket
00:08:32.901   18:29:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1128 -- # fio_start_cmd+='--client=127.0.0.1,10201 --remote-config /root/default_integrity.job '
00:08:32.901   18:29:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1131 -- # true
00:08:32.901   18:29:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1147 -- # true
00:08:32.901   18:29:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1161 -- # /usr/src/fio-static/fio --eta=never --output=/root/vhost_test/fio_results/default_integrity.log --output-format=normal --client=127.0.0.1,10001 --remote-config /root/default_integrity.job --client=127.0.0.1,10101 --remote-config /root/default_integrity.job --client=127.0.0.1,10201 --remote-config /root/default_integrity.job
00:08:47.788   18:29:34 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1162 -- # sleep 1
00:08:49.168   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1164 -- # [[ normal == \j\s\o\n ]]
00:08:49.168   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1172 -- # [[ ! -n '' ]]
00:08:49.168   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1173 -- # cat /root/vhost_test/fio_results/default_integrity.log
00:08:49.168  hostname=VM-2-8-9, be=0, 64-bit, os=Linux, arch=x86-64, fio=fio-3.35, flags=1
00:08:49.168  hostname=VM-1-6-7, be=0, 64-bit, os=Linux, arch=x86-64, fio=fio-3.35, flags=1
00:08:49.168  hostname=VM-0-4-5, be=0, 64-bit, os=Linux, arch=x86-64, fio=fio-3.35, flags=1
00:08:49.168  <VM-2-8-9> nvme-host: (g=0): rw=randwrite, bs=(R) 4096B-512KiB, (W) 4096B-512KiB, (T) 4096B-512KiB, ioengine=libaio, iodepth=512
00:08:49.168  <VM-1-6-7> nvme-host: (g=0): rw=randwrite, bs=(R) 4096B-512KiB, (W) 4096B-512KiB, (T) 4096B-512KiB, ioengine=libaio, iodepth=512
00:08:49.168  <VM-0-4-5> nvme-host: (g=0): rw=randwrite, bs=(R) 4096B-512KiB, (W) 4096B-512KiB, (T) 4096B-512KiB, ioengine=libaio, iodepth=512
00:08:49.168  <VM-0-4-5> Starting 1 thread
00:08:49.168  <VM-1-6-7> Starting 1 thread
00:08:49.168  <VM-2-8-9> Starting 1 thread
00:08:49.168  <VM-2-8-9> 
00:08:49.168  nvme-host: (groupid=0, jobs=1): err= 0: pid=945: Sun Nov 17 18:29:32 2024
00:08:49.168    read: IOPS=1049, BW=176MiB/s (185MB/s)(2048MiB/11632msec)
00:08:49.168      slat (usec): min=48, max=35272, avg=9619.27, stdev=6605.07
00:08:49.168      clat (msec): min=5, max=424, avg=172.57, stdev=90.31
00:08:49.168       lat (msec): min=8, max=440, avg=182.19, stdev=91.15
00:08:49.168      clat percentiles (msec):
00:08:49.168       |  1.00th=[    9],  5.00th=[   28], 10.00th=[   62], 20.00th=[   94],
00:08:49.168       | 30.00th=[  121], 40.00th=[  142], 50.00th=[  165], 60.00th=[  188],
00:08:49.168       | 70.00th=[  213], 80.00th=[  247], 90.00th=[  309], 95.00th=[  342],
00:08:49.168       | 99.00th=[  384], 99.50th=[  397], 99.90th=[  418], 99.95th=[  422],
00:08:49.168       | 99.99th=[  426]
00:08:49.168    write: IOPS=1110, BW=186MiB/s (195MB/s)(2048MiB/10998msec); 0 zone resets
00:08:49.168      slat (usec): min=277, max=120117, avg=28714.29, stdev=19206.94
00:08:49.168      clat (msec): min=7, max=352, avg=146.50, stdev=78.68
00:08:49.168       lat (msec): min=8, max=394, avg=175.22, stdev=84.62
00:08:49.168      clat percentiles (msec):
00:08:49.168       |  1.00th=[   15],  5.00th=[   31], 10.00th=[   51], 20.00th=[   77],
00:08:49.168       | 30.00th=[  100], 40.00th=[  116], 50.00th=[  138], 60.00th=[  155],
00:08:49.168       | 70.00th=[  182], 80.00th=[  215], 90.00th=[  264], 95.00th=[  296],
00:08:49.168       | 99.00th=[  338], 99.50th=[  355], 99.90th=[  355], 99.95th=[  355],
00:08:49.168       | 99.99th=[  355]
00:08:49.168     bw (  KiB/s): min=12616, max=394632, per=100.00%, avg=220739.16, stdev=133192.02, samples=19
00:08:49.168     iops        : min=   60, max= 2048, avg=1284.95, stdev=784.37, samples=19
00:08:49.168    lat (msec)   : 10=1.47%, 20=1.70%, 50=5.60%, 100=17.31%, 250=58.23%
00:08:49.168    lat (msec)   : 500=15.68%
00:08:49.168    cpu          : usr=85.18%, sys=2.06%, ctx=813, majf=0, minf=34
00:08:49.168    IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.5%, >=64=99.1%
00:08:49.168       submit    : 0=0.0%, 4=0.0%, 8=1.2%, 16=0.0%, 32=0.0%, 64=19.2%, >=64=79.6%
00:08:49.168       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:08:49.168       issued rwts: total=12208,12208,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:49.168       latency   : target=0, window=0, percentile=100.00%, depth=512
00:08:49.168  
00:08:49.168  Run status group 0 (all jobs):
00:08:49.168     READ: bw=176MiB/s (185MB/s), 176MiB/s-176MiB/s (185MB/s-185MB/s), io=2048MiB (2147MB), run=11632-11632msec
00:08:49.168    WRITE: bw=186MiB/s (195MB/s), 186MiB/s-186MiB/s (195MB/s-195MB/s), io=2048MiB (2147MB), run=10998-10998msec
00:08:49.168  
00:08:49.168  Disk stats (read/write):
00:08:49.168    nvme0n1: ios=5/0, merge=0/0, ticks=0/0, in_queue=0, util=22.84%
00:08:49.168  <VM-0-4-5> 
00:08:49.168  nvme-host: (groupid=0, jobs=1): err= 0: pid=948: Sun Nov 17 18:29:33 2024
00:08:49.168    read: IOPS=830, BW=162MiB/s (170MB/s)(2072MiB/12804msec)
00:08:49.168      slat (usec): min=30, max=28107, avg=10969.74, stdev=7906.08
00:08:49.168      clat (usec): min=327, max=49103, avg=22432.29, stdev=12651.89
00:08:49.168       lat (usec): min=2577, max=49562, avg=33402.04, stdev=12761.87
00:08:49.168      clat percentiles (usec):
00:08:49.168       |  1.00th=[ 2057],  5.00th=[ 5997], 10.00th=[ 9634], 20.00th=[12125],
00:08:49.168       | 30.00th=[13042], 40.00th=[14222], 50.00th=[16188], 60.00th=[27657],
00:08:49.168       | 70.00th=[28967], 80.00th=[32113], 90.00th=[44303], 95.00th=[46400],
00:08:49.168       | 99.00th=[48497], 99.50th=[49021], 99.90th=[49021], 99.95th=[49021],
00:08:49.168       | 99.99th=[49021]
00:08:49.168    write: IOPS=1725, BW=336MiB/s (352MB/s)(2072MiB/6165msec); 0 zone resets
00:08:49.168      slat (usec): min=315, max=104650, avg=32374.19, stdev=21476.27
00:08:49.168      clat (usec): min=181, max=252403, avg=73625.26, stdev=57007.18
00:08:49.168       lat (msec): min=2, max=271, avg=106.00, stdev=64.67
00:08:49.168      clat percentiles (msec):
00:08:49.168       |  1.00th=[    4],  5.00th=[    7], 10.00th=[   10], 20.00th=[   14],
00:08:49.168       | 30.00th=[   23], 40.00th=[   46], 50.00th=[   68], 60.00th=[   80],
00:08:49.168       | 70.00th=[  111], 80.00th=[  136], 90.00th=[  155], 95.00th=[  178],
00:08:49.168       | 99.00th=[  201], 99.50th=[  215], 99.90th=[  247], 99.95th=[  247],
00:08:49.168       | 99.99th=[  253]
00:08:49.168     bw (  KiB/s): min=156830, max=314288, per=47.49%, avg=163417.20, stdev=31431.48, samples=25
00:08:49.168     iops        : min=  786, max= 1576, avg=819.44, stdev=157.62, samples=25
00:08:49.168    lat (usec)   : 250=0.12%, 500=0.04%
00:08:49.168    lat (msec)   : 2=0.26%, 4=2.21%, 10=9.34%, 20=27.74%, 50=31.91%
00:08:49.168    lat (msec)   : 100=12.03%, 250=16.32%, 500=0.02%
00:08:49.168    cpu          : usr=86.03%, sys=1.61%, ctx=913, majf=0, minf=17
00:08:49.168    IO depths    : 1=0.0%, 2=0.6%, 4=1.2%, 8=1.8%, 16=3.6%, 32=7.8%, >=64=84.8%
00:08:49.168       submit    : 0=0.0%, 4=1.8%, 8=1.8%, 16=3.2%, 32=6.4%, 64=11.8%, >=64=75.0%
00:08:49.168       complete  : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:08:49.168       issued rwts: total=10638,10638,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:49.168       latency   : target=0, window=0, percentile=100.00%, depth=512
00:08:49.168  
00:08:49.168  Run status group 0 (all jobs):
00:08:49.168     READ: bw=162MiB/s (170MB/s), 162MiB/s-162MiB/s (170MB/s-170MB/s), io=2072MiB (2172MB), run=12804-12804msec
00:08:49.168    WRITE: bw=336MiB/s (352MB/s), 336MiB/s-336MiB/s (352MB/s-352MB/s), io=2072MiB (2172MB), run=6165-6165msec
00:08:49.168  
00:08:49.168  Disk stats (read/write):
00:08:49.168    nvme0n1: ios=80/0, merge=0/0, ticks=6/0, in_queue=6, util=35.45%
00:08:49.168  <VM-1-6-7> 
00:08:49.168  nvme-host: (groupid=0, jobs=1): err= 0: pid=947: Sun Nov 17 18:29:34 2024
00:08:49.168    read: IOPS=793, BW=155MiB/s (162MB/s)(2072MiB/13407msec)
00:08:49.168      slat (usec): min=29, max=29132, avg=11682.16, stdev=7968.06
00:08:49.168      clat (usec): min=346, max=63462, avg=23435.90, stdev=13864.59
00:08:49.168       lat (usec): min=5086, max=64314, avg=35118.06, stdev=13823.25
00:08:49.168      clat percentiles (usec):
00:08:49.168       |  1.00th=[ 4228],  5.00th=[ 5669], 10.00th=[ 7439], 20.00th=[10945],
00:08:49.168       | 30.00th=[12780], 40.00th=[15926], 50.00th=[21365], 60.00th=[27395],
00:08:49.168       | 70.00th=[28967], 80.00th=[34341], 90.00th=[42730], 95.00th=[48497],
00:08:49.168       | 99.00th=[57934], 99.50th=[63701], 99.90th=[63701], 99.95th=[63701],
00:08:49.168       | 99.99th=[63701]
00:08:49.168    write: IOPS=1656, BW=323MiB/s (338MB/s)(2072MiB/6423msec); 0 zone resets
00:08:49.168      slat (usec): min=326, max=119132, avg=33235.10, stdev=21863.31
00:08:49.168      clat (msec): min=2, max=276, avg=75.98, stdev=58.67
00:08:49.168       lat (msec): min=3, max=281, avg=109.22, stdev=66.57
00:08:49.168      clat percentiles (msec):
00:08:49.168       |  1.00th=[    4],  5.00th=[    7], 10.00th=[   10], 20.00th=[   14],
00:08:49.168       | 30.00th=[   23], 40.00th=[   49], 50.00th=[   72], 60.00th=[   83],
00:08:49.168       | 70.00th=[  108], 80.00th=[  140], 90.00th=[  159], 95.00th=[  165],
00:08:49.168       | 99.00th=[  215], 99.50th=[  232], 99.90th=[  247], 99.95th=[  247],
00:08:49.168       | 99.99th=[  275]
00:08:49.168     bw (  KiB/s): min=157144, max=157144, per=47.58%, avg=157144.00, stdev= 0.00, samples=27
00:08:49.168     iops        : min=  788, max=  788, avg=788.00, stdev= 0.00, samples=27
00:08:49.168    lat (usec)   : 500=0.09%
00:08:49.168    lat (msec)   : 2=0.34%, 4=0.65%, 10=13.02%, 20=23.81%, 50=30.31%
00:08:49.168    lat (msec)   : 100=14.84%, 250=16.92%, 500=0.02%
00:08:49.168    cpu          : usr=85.88%, sys=1.81%, ctx=1003, majf=0, minf=17
00:08:49.168    IO depths    : 1=0.0%, 2=0.6%, 4=1.2%, 8=1.8%, 16=3.6%, 32=7.8%, >=64=84.8%
00:08:49.168       submit    : 0=0.0%, 4=1.8%, 8=1.8%, 16=3.2%, 32=6.4%, 64=11.8%, >=64=75.0%
00:08:49.168       complete  : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:08:49.168       issued rwts: total=10638,10638,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:49.168       latency   : target=0, window=0, percentile=100.00%, depth=512
00:08:49.168  
00:08:49.168  Run status group 0 (all jobs):
00:08:49.169     READ: bw=155MiB/s (162MB/s), 155MiB/s-155MiB/s (162MB/s-162MB/s), io=2072MiB (2172MB), run=13407-13407msec
00:08:49.169    WRITE: bw=323MiB/s (338MB/s), 323MiB/s-323MiB/s (338MB/s-338MB/s), io=2072MiB (2172MB), run=6423-6423msec
00:08:49.169  
00:08:49.169  Disk stats (read/write):
00:08:49.169    nvme0n1: ios=80/0, merge=0/0, ticks=24/0, in_queue=24, util=27.30%
00:08:49.169  All clients: (groupid=0, jobs=3): err= 0: pid=0: Sun Nov 17 18:29:34 2024
00:08:49.169    read: IOPS=2497, BW=462Mi (484M)(6191MiB/13407msec)
00:08:49.169      slat (usec): min=29, max=35272, avg=10703.71, stdev=7529.11
00:08:49.169      clat (usec): min=327, max=424602, avg=77489.03, stdev=90953.96
00:08:49.169       lat (msec): min=2, max=440, avg=88.19, stdev=90.62
00:08:49.169    write: IOPS=3044, BW=563Mi (590M)(6191MiB/10998msec); 0 zone resets
00:08:49.169      slat (usec): min=277, max=120117, avg=31313.33, stdev=20901.35
00:08:49.169      clat (usec): min=181, max=352682, avg=100944.75, stdev=74660.99
00:08:49.169       lat (msec): min=2, max=394, avg=132.26, stdev=80.05
00:08:49.169     bw (  KiB/s): min=326590, max=866064, per=62.57%, avg=541300.36, stdev=70003.32, samples=71
00:08:49.169     iops        : min= 1634, max= 4412, avg=2892.39, stdev=408.31, samples=71
00:08:49.169    lat (usec)   : 250=0.04%, 500=0.04%
00:08:49.169    lat (msec)   : 2=0.19%, 4=0.91%, 10=7.64%, 20=17.00%, 50=21.81%
00:08:49.169    lat (msec)   : 100=14.85%, 250=31.79%, 500=5.73%
00:08:49.169    cpu          : usr=85.72%, sys=1.82%, ctx=2729, majf=0, minf=68
00:08:49.169    IO depths    : 1=0.0%, 2=0.4%, 4=0.8%, 8=1.1%, 16=2.3%, 32=5.2%, >=64=90.0%
00:08:49.169       submit    : 0=0.0%, 4=1.2%, 8=1.6%, 16=2.1%, 32=4.1%, 64=14.4%, >=64=76.6%
00:08:49.169       complete  : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5%
00:08:49.169       issued rwts: total=33484,33484,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@75 -- # timing_exit run_vm_cmd
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@77 -- # vm_shutdown_all
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@487 -- # local timeo=90 vms vm
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@489 -- # vms=($(vm_list_all))
00:08:49.169    18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@489 -- # vm_list_all
00:08:49.169    18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@466 -- # vms=()
00:08:49.169    18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@466 -- # local vms
00:08:49.169    18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:08:49.169    18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@468 -- # (( 3 > 0 ))
00:08:49.169    18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/0 /root/vhost_test/vms/1 /root/vhost_test/vms/2
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@492 -- # vm_shutdown 0
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@417 -- # vm_num_is_valid 0
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/0
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/0 ]]
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@424 -- # vm_is_running 0
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 0
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/0
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@376 -- # local vm_pid
00:08:49.169    18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/0/qemu.pid
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # vm_pid=397733
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@379 -- # /bin/kill -0 397733
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@380 -- # return 0
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/0'
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/0'
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/0'
00:08:49.169  INFO: Shutting down virtual machine /root/vhost_test/vms/0
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@432 -- # set +e
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@433 -- # vm_exec 0 'nohup sh -c '\''shutdown -h -P now'\'''
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=0
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:49.169    18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:08:49.169    18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:08:49.169    18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:49.169    18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:49.169    18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:08:49.169    18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:08:49.169  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@434 -- # notice 'VM0 is shutting down - wait a while to complete'
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'VM0 is shutting down - wait a while to complete'
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: VM0 is shutting down - wait a while to complete'
00:08:49.169  INFO: VM0 is shutting down - wait a while to complete
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@435 -- # set -e
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@492 -- # vm_shutdown 1
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@417 -- # vm_num_is_valid 1
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/1
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/1 ]]
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@424 -- # vm_is_running 1
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@376 -- # local vm_pid
00:08:49.169    18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # vm_pid=397962
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@379 -- # /bin/kill -0 397962
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@380 -- # return 0
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1'
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1'
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1'
00:08:49.169  INFO: Shutting down virtual machine /root/vhost_test/vms/1
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@432 -- # set +e
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@433 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\'''
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=1
00:08:49.169   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:49.170    18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:08:49.170    18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:08:49.170    18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:49.170    18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:49.170    18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:08:49.170    18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:08:49.170   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:08:49.170  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:08:49.429   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@434 -- # notice 'VM1 is shutting down - wait a while to complete'
00:08:49.429   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete'
00:08:49.429   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:49.429   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:49.429   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:49.429   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:49.429   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:49.429   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete'
00:08:49.429  INFO: VM1 is shutting down - wait a while to complete
00:08:49.429   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@435 -- # set -e
00:08:49.429   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:08:49.429   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@492 -- # vm_shutdown 2
00:08:49.429   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@417 -- # vm_num_is_valid 2
00:08:49.429   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:49.429   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:49.429   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/2
00:08:49.429   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/2 ]]
00:08:49.429   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@424 -- # vm_is_running 2
00:08:49.429   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 2
00:08:49.429   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:49.430   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:49.430   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/2
00:08:49.430   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/2/qemu.pid ]]
00:08:49.430   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@376 -- # local vm_pid
00:08:49.430    18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/2/qemu.pid
00:08:49.430   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # vm_pid=398213
00:08:49.430   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@379 -- # /bin/kill -0 398213
00:08:49.430   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@380 -- # return 0
00:08:49.430   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/2'
00:08:49.430   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/2'
00:08:49.430   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:49.430   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:49.430   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:49.430   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:49.430   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:49.430   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/2'
00:08:49.430  INFO: Shutting down virtual machine /root/vhost_test/vms/2
00:08:49.430   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@432 -- # set +e
00:08:49.430   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@433 -- # vm_exec 2 'nohup sh -c '\''shutdown -h -P now'\'''
00:08:49.430   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 2
00:08:49.430   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:49.430   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:49.430   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=2
00:08:49.430   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:49.430    18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 2
00:08:49.430    18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 2
00:08:49.430    18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:49.430    18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:49.430    18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/2
00:08:49.430    18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/2/ssh_socket
00:08:49.430   18:29:35 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10200 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:08:49.430  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:08:49.689   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@434 -- # notice 'VM2 is shutting down - wait a while to complete'
00:08:49.689   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'VM2 is shutting down - wait a while to complete'
00:08:49.689   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:49.689   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:49.689   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:49.689   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:49.689   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:49.689   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: VM2 is shutting down - wait a while to complete'
00:08:49.689  INFO: VM2 is shutting down - wait a while to complete
00:08:49.689   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@435 -- # set -e
00:08:49.689   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@495 -- # notice 'Waiting for VMs to shutdown...'
00:08:49.689   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...'
00:08:49.689   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:49.689   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:49.689   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:49.689   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:49.689   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:49.689   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...'
00:08:49.689  INFO: Waiting for VMs to shutdown...
00:08:49.689   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@496 -- # (( timeo-- > 0 && 3 > 0 ))
00:08:49.689   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:08:49.689   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # vm_is_running 0
00:08:49.689   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 0
00:08:49.689   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:49.689   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:49.689   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/0
00:08:49.689   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:08:49.689   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@376 -- # local vm_pid
00:08:49.689    18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/0/qemu.pid
00:08:49.689   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # vm_pid=397733
00:08:49.689   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@379 -- # /bin/kill -0 397733
00:08:49.690   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@380 -- # return 0
00:08:49.690   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:08:49.690   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # vm_is_running 1
00:08:49.690   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:08:49.690   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:49.690   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:49.690   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:08:49.690   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:08:49.690   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@376 -- # local vm_pid
00:08:49.690    18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:08:49.690   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # vm_pid=397962
00:08:49.690   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@379 -- # /bin/kill -0 397962
00:08:49.690   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@380 -- # return 0
00:08:49.690   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:08:49.690   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # vm_is_running 2
00:08:49.690   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 2
00:08:49.690   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:49.690   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:49.690   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/2
00:08:49.690   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/2/qemu.pid ]]
00:08:49.690   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@376 -- # local vm_pid
00:08:49.690    18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/2/qemu.pid
00:08:49.690   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # vm_pid=398213
00:08:49.690   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@379 -- # /bin/kill -0 398213
00:08:49.690   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@380 -- # return 0
00:08:49.690   18:29:36 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@500 -- # sleep 1
00:08:50.258  [2024-11-17 18:29:36.740632] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/0/muser/domain/muser0/0: disabling controller
00:08:50.517  [2024-11-17 18:29:37.010304] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller
00:08:50.777   18:29:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@496 -- # (( timeo-- > 0 && 3 > 0 ))
00:08:50.777   18:29:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:08:50.777   18:29:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # vm_is_running 0
00:08:50.777   18:29:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 0
00:08:50.777   18:29:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:50.777   18:29:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:50.777   18:29:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/0
00:08:50.777   18:29:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:08:50.777   18:29:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@373 -- # return 1
00:08:50.777   18:29:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:08:50.777   18:29:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:08:50.777   18:29:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # vm_is_running 1
00:08:50.777   18:29:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:08:50.777   18:29:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:50.777   18:29:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:50.777   18:29:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:08:50.777   18:29:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:08:50.777   18:29:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@373 -- # return 1
00:08:50.777   18:29:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:08:50.777   18:29:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:08:50.777   18:29:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # vm_is_running 2
00:08:50.777   18:29:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 2
00:08:50.777   18:29:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:50.777   18:29:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:50.777   18:29:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/2
00:08:50.777   18:29:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/2/qemu.pid ]]
00:08:50.777   18:29:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@376 -- # local vm_pid
00:08:50.777    18:29:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/2/qemu.pid
00:08:50.777   18:29:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # vm_pid=398213
00:08:50.777   18:29:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@379 -- # /bin/kill -0 398213
00:08:50.777   18:29:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@380 -- # return 0
00:08:50.777   18:29:37 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@500 -- # sleep 1
00:08:50.777  [2024-11-17 18:29:37.326871] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/2/muser/domain/muser2/2: disabling controller
00:08:51.712   18:29:38 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:08:51.712   18:29:38 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:08:51.712   18:29:38 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # vm_is_running 2
00:08:51.712   18:29:38 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 2
00:08:51.712   18:29:38 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:51.712   18:29:38 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:51.712   18:29:38 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/2
00:08:51.712   18:29:38 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/2/qemu.pid ]]
00:08:51.712   18:29:38 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@373 -- # return 1
00:08:51.712   18:29:38 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:08:51.712   18:29:38 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@500 -- # sleep 1
00:08:52.649   18:29:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@496 -- # (( timeo-- > 0 && 0 > 0 ))
00:08:52.649   18:29:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@503 -- # (( 0 == 0 ))
00:08:52.649   18:29:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@504 -- # notice 'All VMs successfully shut down'
00:08:52.649   18:29:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down'
00:08:52.649   18:29:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:52.649   18:29:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:52.649   18:29:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:52.649   18:29:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:52.649   18:29:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:52.649   18:29:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down'
00:08:52.649  INFO: All VMs successfully shut down
00:08:52.649   18:29:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@505 -- # return 0
00:08:52.649   18:29:39 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@79 -- # timing_enter clean_vfio_user
00:08:52.649   18:29:39 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:52.649   18:29:39 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:08:52.649    18:29:39 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@81 -- # seq 0 2
00:08:52.649   18:29:39 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@81 -- # for i in $(seq 0 $vm_no)
00:08:52.649   18:29:39 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@82 -- # vm_muser_dir=/root/vhost_test/vms/0/muser
00:08:52.649   18:29:39 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@83 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_remove_listener nqn.2019-07.io.spdk:cnode0 -t vfiouser -a /root/vhost_test/vms/0/muser/domain/muser0/0 -s 0
00:08:52.908   18:29:39 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@84 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_delete_subsystem nqn.2019-07.io.spdk:cnode0
00:08:53.166   18:29:39 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@85 -- # (( i == vm_no ))
00:08:53.166   18:29:39 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@88 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_malloc_delete Malloc0
00:08:53.425   18:29:39 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@81 -- # for i in $(seq 0 $vm_no)
00:08:53.425   18:29:39 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@82 -- # vm_muser_dir=/root/vhost_test/vms/1/muser
00:08:53.425   18:29:39 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@83 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_remove_listener nqn.2019-07.io.spdk:cnode1 -t vfiouser -a /root/vhost_test/vms/1/muser/domain/muser1/1 -s 0
00:08:53.682   18:29:40 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@84 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_delete_subsystem nqn.2019-07.io.spdk:cnode1
00:08:53.940   18:29:40 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@85 -- # (( i == vm_no ))
00:08:53.940   18:29:40 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@88 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_malloc_delete Malloc1
00:08:54.199   18:29:40 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@81 -- # for i in $(seq 0 $vm_no)
00:08:54.199   18:29:40 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@82 -- # vm_muser_dir=/root/vhost_test/vms/2/muser
00:08:54.199   18:29:40 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@83 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_remove_listener nqn.2019-07.io.spdk:cnode2 -t vfiouser -a /root/vhost_test/vms/2/muser/domain/muser2/2 -s 0
00:08:54.458   18:29:40 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@84 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_delete_subsystem nqn.2019-07.io.spdk:cnode2
00:08:54.458   18:29:41 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@85 -- # (( i == vm_no ))
00:08:54.458   18:29:41 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@86 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_nvme_detach_controller Nvme0
00:08:56.363   18:29:42 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@92 -- # vhost_kill 0
00:08:56.363   18:29:42 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@202 -- # local rc=0
00:08:56.363   18:29:42 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@203 -- # local vhost_name=0
00:08:56.363   18:29:42 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@205 -- # [[ -z 0 ]]
00:08:56.363   18:29:42 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@210 -- # local vhost_dir
00:08:56.363    18:29:42 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@211 -- # get_vhost_dir 0
00:08:56.363    18:29:42 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@105 -- # local vhost_name=0
00:08:56.363    18:29:42 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:08:56.363    18:29:42 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:08:56.363   18:29:42 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@211 -- # vhost_dir=/root/vhost_test/vhost/0
00:08:56.363   18:29:42 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@212 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:08:56.363   18:29:42 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@214 -- # [[ ! -r /root/vhost_test/vhost/0/vhost.pid ]]
00:08:56.363   18:29:42 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@219 -- # timing_enter vhost_kill
00:08:56.363   18:29:42 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:56.363   18:29:42 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:08:56.363   18:29:42 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@220 -- # local vhost_pid
00:08:56.363    18:29:42 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@221 -- # cat /root/vhost_test/vhost/0/vhost.pid
00:08:56.363   18:29:42 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@221 -- # vhost_pid=396347
00:08:56.363   18:29:42 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@222 -- # notice 'killing vhost (PID 396347) app'
00:08:56.363   18:29:42 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'killing vhost (PID 396347) app'
00:08:56.363   18:29:42 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:56.363   18:29:42 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:56.363   18:29:42 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:56.363   18:29:42 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:56.363   18:29:42 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:56.363   18:29:42 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: killing vhost (PID 396347) app'
00:08:56.363  INFO: killing vhost (PID 396347) app
00:08:56.363   18:29:42 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@224 -- # kill -INT 396347
00:08:56.363   18:29:42 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@225 -- # notice 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:08:56.363   18:29:42 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:08:56.363   18:29:42 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:56.363   18:29:42 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:56.363   18:29:42 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:56.363   18:29:42 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:56.363   18:29:42 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:56.363   18:29:42 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: sent SIGINT to vhost app - waiting 60 seconds to exit'
00:08:56.363  INFO: sent SIGINT to vhost app - waiting 60 seconds to exit
00:08:56.363   18:29:42 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@226 -- # (( i = 0 ))
00:08:56.363   18:29:42 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@226 -- # (( i < 60 ))
00:08:56.363   18:29:42 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@227 -- # kill -0 396347
00:08:56.363   18:29:42 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@228 -- # echo .
00:08:56.363  .
00:08:56.363   18:29:42 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@229 -- # sleep 1
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@226 -- # (( i++ ))
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@226 -- # (( i < 60 ))
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@227 -- # kill -0 396347
00:08:57.301  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 227: kill: (396347) - No such process
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@231 -- # break
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@234 -- # kill -0 396347
00:08:57.301  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 234: kill: (396347) - No such process
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@239 -- # kill -0 396347
00:08:57.301  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 239: kill: (396347) - No such process
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@245 -- # is_pid_child 396347
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1668 -- # local pid=396347 _pid
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1670 -- # read -r _pid
00:08:57.301    18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1667 -- # jobs -pr
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1671 -- # (( pid == _pid ))
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1670 -- # read -r _pid
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1674 -- # return 1
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@257 -- # timing_exit vhost_kill
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@259 -- # rm -rf /root/vhost_test/vhost/0
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@261 -- # return 0
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@93 -- # timing_exit clean_vfio_user
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@94 -- # vhosttestfini
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@54 -- # '[' '' == iso ']'
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@1 -- # clean_vfio_user
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@6 -- # vm_kill_all
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@476 -- # local vm
00:08:57.301    18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@477 -- # vm_list_all
00:08:57.301    18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@466 -- # vms=()
00:08:57.301    18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@466 -- # local vms
00:08:57.301    18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:08:57.301    18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@468 -- # (( 3 > 0 ))
00:08:57.301    18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/0 /root/vhost_test/vms/1 /root/vhost_test/vms/2
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@477 -- # for vm in $(vm_list_all)
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@478 -- # vm_kill 0
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@442 -- # vm_num_is_valid 0
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@443 -- # local vm_dir=/root/vhost_test/vms/0
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@445 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@446 -- # return 0
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@477 -- # for vm in $(vm_list_all)
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@478 -- # vm_kill 1
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@442 -- # vm_num_is_valid 1
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@443 -- # local vm_dir=/root/vhost_test/vms/1
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@445 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@446 -- # return 0
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@477 -- # for vm in $(vm_list_all)
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@478 -- # vm_kill 2
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@442 -- # vm_num_is_valid 2
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@443 -- # local vm_dir=/root/vhost_test/vms/2
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@445 -- # [[ ! -r /root/vhost_test/vms/2/qemu.pid ]]
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@446 -- # return 0
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@481 -- # rm -rf /root/vhost_test/vms
00:08:57.301   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@7 -- # vhost_kill 0
00:08:57.302   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@202 -- # local rc=0
00:08:57.302   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@203 -- # local vhost_name=0
00:08:57.302   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@205 -- # [[ -z 0 ]]
00:08:57.302   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@210 -- # local vhost_dir
00:08:57.302    18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@211 -- # get_vhost_dir 0
00:08:57.302    18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@105 -- # local vhost_name=0
00:08:57.302    18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:08:57.302    18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:08:57.302   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@211 -- # vhost_dir=/root/vhost_test/vhost/0
00:08:57.302   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@212 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:08:57.302   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@214 -- # [[ ! -r /root/vhost_test/vhost/0/vhost.pid ]]
00:08:57.302   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@215 -- # warning 'no vhost pid file found'
00:08:57.302   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@90 -- # message WARN 'no vhost pid file found'
00:08:57.302   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:57.302   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:57.302   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:57.302   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=WARN
00:08:57.302   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:57.302   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'WARN: no vhost pid file found'
00:08:57.302  WARN: no vhost pid file found
00:08:57.302   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@216 -- # return 0
00:08:57.302  
00:08:57.302  real	1m6.517s
00:08:57.302  user	4m24.277s
00:08:57.302  sys	0m2.450s
00:08:57.302   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:57.302   18:29:43 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:08:57.302  ************************************
00:08:57.302  END TEST vfio_user_nvme_fio
00:08:57.302  ************************************
00:08:57.302   18:29:43 vfio_user_qemu -- vfio_user/vfio_user.sh@16 -- # run_test vfio_user_nvme_restart_vm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/vfio_user_restart_vm.sh
00:08:57.302   18:29:43 vfio_user_qemu -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:57.302   18:29:43 vfio_user_qemu -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:57.302   18:29:43 vfio_user_qemu -- common/autotest_common.sh@10 -- # set +x
00:08:57.302  ************************************
00:08:57.302  START TEST vfio_user_nvme_restart_vm
00:08:57.302  ************************************
00:08:57.302   18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/vfio_user_restart_vm.sh
00:08:57.302  * Looking for test storage...
00:08:57.302  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme
00:08:57.302    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:08:57.302     18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1693 -- # lcov --version
00:08:57.302     18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:08:57.302    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:08:57.302    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:57.302    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:57.302    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:57.302    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@336 -- # IFS=.-:
00:08:57.302    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@336 -- # read -ra ver1
00:08:57.302    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@337 -- # IFS=.-:
00:08:57.302    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@337 -- # read -ra ver2
00:08:57.302    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@338 -- # local 'op=<'
00:08:57.302    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@340 -- # ver1_l=2
00:08:57.302    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@341 -- # ver2_l=1
00:08:57.302    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:57.302    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@344 -- # case "$op" in
00:08:57.302    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@345 -- # : 1
00:08:57.302    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:57.302    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:57.302     18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@365 -- # decimal 1
00:08:57.302     18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@353 -- # local d=1
00:08:57.302     18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:57.302     18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@355 -- # echo 1
00:08:57.302    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@365 -- # ver1[v]=1
00:08:57.302     18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@366 -- # decimal 2
00:08:57.302     18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@353 -- # local d=2
00:08:57.302     18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:57.302     18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@355 -- # echo 2
00:08:57.302    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@366 -- # ver2[v]=2
00:08:57.302    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:57.302    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:57.302    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@368 -- # return 0
00:08:57.302    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:57.302    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:08:57.302  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:57.302  		--rc genhtml_branch_coverage=1
00:08:57.302  		--rc genhtml_function_coverage=1
00:08:57.302  		--rc genhtml_legend=1
00:08:57.302  		--rc geninfo_all_blocks=1
00:08:57.302  		--rc geninfo_unexecuted_blocks=1
00:08:57.302  		
00:08:57.302  		'
00:08:57.302    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:08:57.302  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:57.302  		--rc genhtml_branch_coverage=1
00:08:57.302  		--rc genhtml_function_coverage=1
00:08:57.302  		--rc genhtml_legend=1
00:08:57.302  		--rc geninfo_all_blocks=1
00:08:57.302  		--rc geninfo_unexecuted_blocks=1
00:08:57.302  		
00:08:57.302  		'
00:08:57.302    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:08:57.302  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:57.302  		--rc genhtml_branch_coverage=1
00:08:57.302  		--rc genhtml_function_coverage=1
00:08:57.302  		--rc genhtml_legend=1
00:08:57.302  		--rc geninfo_all_blocks=1
00:08:57.302  		--rc geninfo_unexecuted_blocks=1
00:08:57.302  		
00:08:57.302  		'
00:08:57.302    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:08:57.302  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:57.302  		--rc genhtml_branch_coverage=1
00:08:57.302  		--rc genhtml_function_coverage=1
00:08:57.302  		--rc genhtml_legend=1
00:08:57.302  		--rc geninfo_all_blocks=1
00:08:57.302  		--rc geninfo_unexecuted_blocks=1
00:08:57.302  		
00:08:57.302  		'
00:08:57.302   18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/common.sh
00:08:57.302    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/common.sh@6 -- # : 128
00:08:57.302    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/common.sh@7 -- # : 512
00:08:57.302    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/common.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh
00:08:57.302     18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@6 -- # : false
00:08:57.302     18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@7 -- # : /root/vhost_test
00:08:57.302     18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@8 -- # : /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:08:57.302     18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@9 -- # : qemu-img
00:08:57.302      18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/..
00:08:57.302     18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest
00:08:57.302     18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:08:57.302     18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:08:57.302     18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:08:57.302     18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:08:57.302     18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:08:57.302       18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/vfio_user_restart_vm.sh
00:08:57.302      18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme
00:08:57.302     18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme
00:08:57.302     18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:08:57.302     18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:08:57.302     18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:08:57.302     18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:08:57.303     18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config
00:08:57.303      18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:08:57.303      18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@2 -- # vhost_0_main_core=0
00:08:57.303      18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:08:57.303      18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:08:57.303      18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:08:57.303      18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:08:57.303      18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:08:57.303      18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:08:57.303      18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:08:57.303      18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:08:57.303      18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:08:57.303      18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:08:57.303      18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:08:57.303      18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:08:57.303      18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:08:57.303      18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:08:57.303      18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:08:57.303      18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:08:57.303      18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:08:57.303      18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:08:57.303      18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:08:57.303      18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:08:57.303      18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:08:57.303      18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:08:57.303      18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:08:57.303      18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:08:57.303     18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh
00:08:57.303      18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:08:57.303      18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:08:57.303      18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:08:57.303      18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler
00:08:57.303      18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:08:57.303      18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh
00:08:57.303       18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:08:57.303        18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/cgroups.sh@244 -- # check_cgroup
00:08:57.303        18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:08:57.303        18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:08:57.303        18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/cgroups.sh@10 -- # echo 2
00:08:57.303       18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/cgroups.sh@244 -- # cgroup_version=2
00:08:57.563    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/common.sh@11 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:08:57.563    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/common.sh@14 -- # [[ ! -e /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 ]]
00:08:57.563    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/common.sh@19 -- # QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:08:57.563   18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/common.sh
00:08:57.563   18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/autotest.config
00:08:57.563    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@1 -- # vhost_0_reactor_mask='[0-3]'
00:08:57.563    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@2 -- # vhost_0_main_core=0
00:08:57.563    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@4 -- # VM_0_qemu_mask=4-5
00:08:57.563    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:08:57.563    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@7 -- # VM_1_qemu_mask=6-7
00:08:57.563    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:08:57.563    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@10 -- # VM_2_qemu_mask=8-9
00:08:57.563    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:08:57.563   18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@13 -- # bdfs=($(get_nvme_bdfs))
00:08:57.563    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@13 -- # get_nvme_bdfs
00:08:57.563    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1498 -- # bdfs=()
00:08:57.563    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1498 -- # local bdfs
00:08:57.563    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:08:57.563     18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/gen_nvme.sh
00:08:57.563     18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:08:57.563    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:08:57.563    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0d:00.0
00:08:57.563    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@14 -- # get_vhost_dir 0
00:08:57.563    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:08:57.563    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:08:57.563    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:08:57.563   18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@14 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:08:57.563   18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@16 -- # trap clean_vfio_user EXIT
00:08:57.563   18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@18 -- # vhosttestinit
00:08:57.563   18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@37 -- # '[' '' == iso ']'
00:08:57.563   18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@41 -- # [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz ]]
00:08:57.563   18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@41 -- # [[ ! -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:08:57.563   18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@46 -- # [[ ! -f /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:08:57.563   18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@20 -- # vfio_user_run 0
00:08:57.563   18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@11 -- # local vhost_name=0
00:08:57.563   18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@12 -- # local vfio_user_dir nvmf_pid_file rpc_py
00:08:57.563    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@14 -- # get_vhost_dir 0
00:08:57.563    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:08:57.563    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:08:57.563    18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:08:57.563   18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@14 -- # vfio_user_dir=/root/vhost_test/vhost/0
00:08:57.563   18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@15 -- # nvmf_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:08:57.563   18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@16 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:08:57.563   18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@18 -- # mkdir -p /root/vhost_test/vhost/0
00:08:57.563   18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@20 -- # timing_enter vfio_user_start
00:08:57.563   18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:57.563   18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:08:57.563   18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/nvmf_tgt -r /root/vhost_test/vhost/0/rpc.sock -m 0xf -s 512
00:08:57.563   18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@22 -- # nvmfpid=408550
00:08:57.563   18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@23 -- # echo 408550
00:08:57.563   18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@25 -- # echo 'Process pid: 408550'
00:08:57.563  Process pid: 408550
00:08:57.563   18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@26 -- # echo 'waiting for app to run...'
00:08:57.563  waiting for app to run...
00:08:57.563   18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@27 -- # waitforlisten 408550 /root/vhost_test/vhost/0/rpc.sock
00:08:57.563   18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@835 -- # '[' -z 408550 ']'
00:08:57.563   18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@839 -- # local rpc_addr=/root/vhost_test/vhost/0/rpc.sock
00:08:57.563   18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:57.563   18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...'
00:08:57.563  Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...
00:08:57.563   18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:57.563   18:29:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:08:57.563  [2024-11-17 18:29:44.029131] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:08:57.563  [2024-11-17 18:29:44.029274] [ DPDK EAL parameters: nvmf --no-shconf -c 0xf -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid408550 ]
00:08:57.563  EAL: No free 2048 kB hugepages reported on node 1
00:08:57.822  [2024-11-17 18:29:44.288117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:57.822  [2024-11-17 18:29:44.318270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:57.822  [2024-11-17 18:29:44.318349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:08:57.822  [2024-11-17 18:29:44.318356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:57.822  [2024-11-17 18:29:44.318407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:08:58.389   18:29:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:58.389   18:29:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@868 -- # return 0
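The `waitforlisten` calls traced above poll (with `max_retries=100`) until the target's UNIX-domain RPC socket appears. A reduced sketch of that wait loop follows; the real helper also checks the PID and round-trips an actual RPC, which is omitted here:

```shell
#!/usr/bin/env bash
# Reduced sketch of waitforlisten: poll until a socket node appears at the
# RPC path, then succeed. This only checks that the socket file exists;
# the real helper additionally verifies the process and issues an rpc_cmd.
wait_for_rpc_sock() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        [[ -S $sock ]] && return 0   # socket node exists: target is listening
        sleep 0.1
    done
    return 1                         # gave up after retries
}
```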
00:08:58.389   18:29:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@29 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_create_transport -t VFIOUSER
00:08:58.648   18:29:45 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@30 -- # timing_exit vfio_user_start
00:08:58.648   18:29:45 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:58.648   18:29:45 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:08:58.648   18:29:45 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@22 -- # vm_muser_dir=/root/vhost_test/vms/1/muser
00:08:58.648   18:29:45 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@23 -- # rm -rf /root/vhost_test/vms/1/muser
00:08:58.648   18:29:45 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@24 -- # mkdir -p /root/vhost_test/vms/1/muser/domain/muser1/1
00:08:58.648   18:29:45 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@26 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_nvme_attach_controller -b Nvme0 -t pcie -a 0000:0d:00.0
00:09:01.937  Nvme0n1
00:09:01.937   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@27 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -s SPDK001 -a
00:09:01.937   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@28 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Nvme0n1
00:09:02.196   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@29 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /root/vhost_test/vms/1/muser/domain/muser1/1 -s 0
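The target-side setup traced above is a fixed sequence of rpc.py calls: create the VFIOUSER transport, attach the local PCIe NVMe device as a bdev, create a subsystem, add the namespace, and bind a vfio-user listener to the per-VM socket directory. A standalone sketch of that sequence (socket path, BDF, and NQN copied from this run; `rpc` is local shorthand, not an SPDK helper):

```shell
#!/usr/bin/env bash
# Hedged sketch of the RPC sequence the test issues above, emitted one
# command per line in order. Paths and identifiers are taken from this log.
rpc="scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock"
listener_sock=/root/vhost_test/vms/1/muser/domain/muser1/1

build_setup_cmds() {
    printf '%s\n' \
        "$rpc nvmf_create_transport -t VFIOUSER" \
        "$rpc bdev_nvme_attach_controller -b Nvme0 -t pcie -a 0000:0d:00.0" \
        "$rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -s SPDK001 -a" \
        "$rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Nvme0n1" \
        "$rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a $listener_sock -s 0"
}
build_setup_cmds
```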
00:09:02.456   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@31 -- # vm_setup --disk-type=vfio_user --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=1
00:09:02.456   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@518 -- # xtrace_disable
00:09:02.456   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:09:02.456  WARN: removing existing VM in '/root/vhost_test/vms/1'
00:09:02.456  INFO: Creating new VM in /root/vhost_test/vms/1
00:09:02.456  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:09:02.456  INFO: TASK MASK: 6-7
00:09:02.456   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@671 -- # local node_num=0
00:09:02.456   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@672 -- # local boot_disk_present=false
00:09:02.456   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:09:02.456   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:09:02.456   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:02.456   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:09:02.456   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:02.456   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:02.456   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:09:02.456   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:09:02.456  INFO: NUMA NODE: 0
00:09:02.456   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:09:02.456   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:09:02.456   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:09:02.456   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:09:02.456   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@677 -- # [[ -n '' ]]
00:09:02.456   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:09:02.456   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:09:02.456   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@686 -- # [[ -z '' ]]
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@701 -- # IFS=,
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@701 -- # read -r disk disk_type _
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@702 -- # [[ -z '' ]]
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@702 -- # disk_type=vfio_user
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@704 -- # case $disk_type in
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@758 -- # notice 'using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:09:02.457  INFO: using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@759 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/$vm_num/muser/domain/muser$disk/$disk/cntrl")
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@760 -- # [[ 1 == '' ]]
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@780 -- # [[ -n '' ]]
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@785 -- # (( 0 ))
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh'
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh'
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh'
00:09:02.457  INFO: Saving to /root/vhost_test/vms/1/run.sh
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@787 -- # cat
00:09:02.457    18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/1/muser/domain/muser1/1/cntrl
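The `run.sh` printed above is built by appending to a bash `cmd` array. A trimmed sketch of the vfio-user-relevant parts (values copied from this run; the full command also carries VNC, serial, SeaBIOS, and net options). The key constraint is that vfio-user requires guest RAM in a shared, file-backed region (`share=on`) so the target process can mmap it:

```shell
#!/usr/bin/env bash
# Trimmed sketch of how the test composes the QEMU invocation above.
# Ports, memory size, and the socket path are from this run, not fixed.
build_qemu_cmd() {
    local vm_dir=/root/vhost_test/vms/1
    local cmd=(
        qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -daemonize
        # Guest RAM must be a shared file-backed region so the vfio-user
        # target can map it:
        -object "memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind"
        -numa node,memdev=mem
        -snapshot
        -pidfile "$vm_dir/qemu.pid"
        # The emulated NVMe controller, served over the UNIX socket that
        # the nvmf listener was bound to:
        -device "vfio-user-pci,x-msg-timeout=5000,socket=$vm_dir/muser/domain/muser1/1/cntrl"
    )
    printf '%s\n' "${cmd[@]}"
}
build_qemu_cmd
```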
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/1/run.sh
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@827 -- # echo 10100
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@828 -- # echo 10101
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@829 -- # echo 10102
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/1/migration_port
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@832 -- # [[ -z '' ]]
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@834 -- # echo 10104
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@835 -- # echo 101
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@837 -- # [[ -z '' ]]
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@838 -- # [[ -z '' ]]
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@32 -- # vm_run 1
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@843 -- # local run_all=false
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@844 -- # local vms_to_run=
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@846 -- # getopts a-: optchar
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@856 -- # false
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@859 -- # shift 0
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@860 -- # for vm in "$@"
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@861 -- # vm_num_is_valid 1
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]]
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@866 -- # vms_to_run+=' 1'
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@871 -- # vm_is_running 1
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@373 -- # return 1
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/1/run.sh'
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh'
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh'
00:09:02.457  INFO: running /root/vhost_test/vms/1/run.sh
00:09:02.457   18:29:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@877 -- # /root/vhost_test/vms/1/run.sh
00:09:02.457  Running VM in /root/vhost_test/vms/1
00:09:03.026  Waiting for QEMU pid file
00:09:03.026  [2024-11-17 18:29:49.568175] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: enabling controller
00:09:03.962  === qemu.log ===
00:09:03.962  === qemu.log ===
00:09:03.962   18:29:50 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@33 -- # vm_wait_for_boot 60 1
00:09:03.962   18:29:50 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@913 -- # assert_number 60
00:09:03.962   18:29:50 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@281 -- # [[ 60 =~ [0-9]+ ]]
00:09:03.962   18:29:50 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@281 -- # return 0
00:09:03.962   18:29:50 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@915 -- # xtrace_disable
00:09:03.962   18:29:50 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:09:03.962  INFO: Waiting for VMs to boot
00:09:03.962  INFO: waiting for VM1 (/root/vhost_test/vms/1)
00:09:18.843  [2024-11-17 18:30:03.516415] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller
00:09:18.843  [2024-11-17 18:30:03.525446] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller
00:09:18.843  [2024-11-17 18:30:03.529473] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: enabling controller
00:09:26.969  
00:09:26.969  INFO: VM1 ready
00:09:26.969  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:09:27.228  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:09:28.163  INFO: all VMs ready
00:09:28.163   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@973 -- # return 0
00:09:28.164   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@35 -- # vm_exec 1 lsblk
00:09:28.164   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:09:28.164   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:28.164   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:28.164   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:09:28.164   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@339 -- # shift
00:09:28.164    18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:09:28.164    18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:09:28.164    18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:28.164    18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:28.164    18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:09:28.164    18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:09:28.164   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 lsblk
00:09:28.164  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:09:28.423  NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
00:09:28.423  sda       8:0    0     5G  0 disk 
00:09:28.423  ├─sda1    8:1    0     1M  0 part 
00:09:28.423  ├─sda2    8:2    0  1000M  0 part /boot
00:09:28.423  ├─sda3    8:3    0   100M  0 part /boot/efi
00:09:28.423  ├─sda4    8:4    0     4M  0 part 
00:09:28.423  └─sda5    8:5    0   3.9G  0 part /home
00:09:28.423                                    /
00:09:28.423  zram0   252:0    0   946M  0 disk [SWAP]
00:09:28.423  nvme0n1 259:1    0 931.5G  0 disk 
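The `lsblk` output above confirms the guest sees the vfio-user NVMe namespace (`nvme0n1`) alongside the OS disk. A self-contained sketch of a check one could layer on top of such output; the captured text is inlined here so the snippet runs without a guest:

```shell
#!/usr/bin/env bash
# Hedged sketch: count NVMe block devices in (captured) lsblk output.
# In the real test this would consume `vm_exec 1 lsblk` instead.
lsblk_out='sda       8:0    0     5G  0 disk
zram0   252:0    0   946M  0 disk [SWAP]
nvme0n1 259:1    0 931.5G  0 disk'

count_nvme() {
    # Device names start the line in lsblk's default tree output.
    grep -c '^nvme' <<<"$1"
}
count_nvme "$lsblk_out"
```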
00:09:28.423   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@37 -- # vm_shutdown_all
00:09:28.423   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@487 -- # local timeo=90 vms vm
00:09:28.423   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@489 -- # vms=($(vm_list_all))
00:09:28.423    18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@489 -- # vm_list_all
00:09:28.423    18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@466 -- # vms=()
00:09:28.423    18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@466 -- # local vms
00:09:28.423    18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:09:28.423    18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:09:28.423    18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:09:28.423   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:09:28.423   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@492 -- # vm_shutdown 1
00:09:28.423   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@417 -- # vm_num_is_valid 1
00:09:28.423   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:28.423   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:28.423   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/1
00:09:28.423   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/1 ]]
00:09:28.423   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@424 -- # vm_is_running 1
00:09:28.423   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:09:28.423   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:28.423   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:28.423   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:09:28.423   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:09:28.423   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:09:28.423    18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:09:28.423   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # vm_pid=409448
00:09:28.423   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 409448
00:09:28.423   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@380 -- # return 0
00:09:28.423   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1'
00:09:28.423   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1'
00:09:28.423   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:28.423   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:09:28.423   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:28.423   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:28.423   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:09:28.424   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1'
00:09:28.424  INFO: Shutting down virtual machine /root/vhost_test/vms/1
00:09:28.424   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@432 -- # set +e
00:09:28.424   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@433 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\'''
00:09:28.424   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:09:28.424   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:28.424   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:28.424   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:09:28.424   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@339 -- # shift
00:09:28.424    18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:09:28.424    18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:09:28.424    18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:28.424    18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:28.424    18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:09:28.424    18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:09:28.424   18:30:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:09:28.424  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:09:28.683   18:30:15 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@434 -- # notice 'VM1 is shutting down - wait a while to complete'
00:09:28.683   18:30:15 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete'
00:09:28.683   18:30:15 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:28.683   18:30:15 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:09:28.683   18:30:15 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:28.683   18:30:15 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:28.683   18:30:15 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:09:28.683   18:30:15 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete'
00:09:28.683  INFO: VM1 is shutting down - wait a while to complete
00:09:28.683   18:30:15 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@435 -- # set -e
00:09:28.683   18:30:15 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@495 -- # notice 'Waiting for VMs to shutdown...'
00:09:28.683   18:30:15 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...'
00:09:28.683   18:30:15 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:28.683   18:30:15 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:09:28.683   18:30:15 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:28.683   18:30:15 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:28.683   18:30:15 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:09:28.683   18:30:15 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...'
00:09:28.683  INFO: Waiting for VMs to shutdown...
00:09:28.683   18:30:15 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:09:28.683   18:30:15 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:09:28.683   18:30:15 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:09:28.683   18:30:15 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:09:28.683   18:30:15 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:28.683   18:30:15 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:28.683   18:30:15 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:09:28.683   18:30:15 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:09:28.683   18:30:15 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:09:28.683    18:30:15 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:09:28.683   18:30:15 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # vm_pid=409448
00:09:28.683   18:30:15 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 409448
00:09:28.683   18:30:15 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@380 -- # return 0
00:09:28.683   18:30:15 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:09:29.638   18:30:16 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:09:29.638   18:30:16 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:09:29.638   18:30:16 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:09:29.638   18:30:16 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:09:29.638   18:30:16 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:29.638   18:30:16 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:29.638   18:30:16 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:09:29.638   18:30:16 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:09:29.638   18:30:16 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:09:29.638    18:30:16 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:09:29.638   18:30:16 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # vm_pid=409448
00:09:29.638   18:30:16 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 409448
00:09:29.638   18:30:16 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@380 -- # return 0
00:09:29.638   18:30:16 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:09:29.897  [2024-11-17 18:30:16.305835] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller
00:09:30.833   18:30:17 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:09:30.833   18:30:17 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:09:30.833   18:30:17 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:09:30.833   18:30:17 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:09:30.833   18:30:17 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:30.833   18:30:17 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:30.833   18:30:17 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:09:30.833   18:30:17 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:09:30.833   18:30:17 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@373 -- # return 1
00:09:30.833   18:30:17 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:09:30.833   18:30:17 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 0 > 0 ))
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@503 -- # (( 0 == 0 ))
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@504 -- # notice 'All VMs successfully shut down'
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down'
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down'
00:09:31.771  INFO: All VMs successfully shut down
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@505 -- # return 0
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@40 -- # vm_setup --disk-type=vfio_user --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=1
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@518 -- # xtrace_disable
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:09:31.771  WARN: removing existing VM in '/root/vhost_test/vms/1'
00:09:31.771  INFO: Creating new VM in /root/vhost_test/vms/1
00:09:31.771  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:09:31.771  INFO: TASK MASK: 6-7
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@671 -- # local node_num=0
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@672 -- # local boot_disk_present=false
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:09:31.771  INFO: NUMA NODE: 0
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@677 -- # [[ -n '' ]]
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@686 -- # [[ -z '' ]]
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@701 -- # IFS=,
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@701 -- # read -r disk disk_type _
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@702 -- # [[ -z '' ]]
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@702 -- # disk_type=vfio_user
00:09:31.771   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@704 -- # case $disk_type in
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@758 -- # notice 'using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:09:31.772  INFO: using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@759 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/$vm_num/muser/domain/muser$disk/$disk/cntrl")
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@760 -- # [[ 1 == '' ]]
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@780 -- # [[ -n '' ]]
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@785 -- # (( 0 ))
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh'
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh'
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh'
00:09:31.772  INFO: Saving to /root/vhost_test/vms/1/run.sh
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@787 -- # cat
00:09:31.772    18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/1/muser/domain/muser1/1/cntrl
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/1/run.sh
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@827 -- # echo 10100
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@828 -- # echo 10101
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@829 -- # echo 10102
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/1/migration_port
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@832 -- # [[ -z '' ]]
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@834 -- # echo 10104
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@835 -- # echo 101
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@837 -- # [[ -z '' ]]
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@838 -- # [[ -z '' ]]
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@41 -- # vm_run 1
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@843 -- # local run_all=false
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@844 -- # local vms_to_run=
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@846 -- # getopts a-: optchar
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@856 -- # false
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@859 -- # shift 0
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@860 -- # for vm in "$@"
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@861 -- # vm_num_is_valid 1
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]]
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@866 -- # vms_to_run+=' 1'
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@871 -- # vm_is_running 1
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@373 -- # return 1
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/1/run.sh'
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh'
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh'
00:09:31.772  INFO: running /root/vhost_test/vms/1/run.sh
00:09:31.772   18:30:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@877 -- # /root/vhost_test/vms/1/run.sh
00:09:31.772  Running VM in /root/vhost_test/vms/1
00:09:32.031  Waiting for QEMU pid file
00:09:32.290  [2024-11-17 18:30:18.774478] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: enabling controller
00:09:33.225  === qemu.log ===
00:09:33.225  === qemu.log ===
00:09:33.225   18:30:19 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@42 -- # vm_wait_for_boot 60 1
00:09:33.225   18:30:19 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@913 -- # assert_number 60
00:09:33.225   18:30:19 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@281 -- # [[ 60 =~ [0-9]+ ]]
00:09:33.225   18:30:19 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@281 -- # return 0
00:09:33.225   18:30:19 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@915 -- # xtrace_disable
00:09:33.225   18:30:19 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:09:33.225  INFO: Waiting for VMs to boot
00:09:33.225  INFO: waiting for VM1 (/root/vhost_test/vms/1)
00:09:48.106  [2024-11-17 18:30:32.147731] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller
00:09:48.106  [2024-11-17 18:30:32.156765] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller
00:09:48.106  [2024-11-17 18:30:32.160780] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: enabling controller
00:09:54.673  
00:09:54.673  INFO: VM1 ready
00:09:54.932  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:09:54.932  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:09:55.870  INFO: all VMs ready
00:09:55.870   18:30:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@973 -- # return 0
00:09:55.870   18:30:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@44 -- # vm_exec 1 lsblk
00:09:55.870   18:30:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:09:55.870   18:30:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:55.870   18:30:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:55.870   18:30:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:09:55.870   18:30:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@339 -- # shift
00:09:55.870    18:30:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:09:55.870    18:30:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:09:55.870    18:30:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:55.870    18:30:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:55.870    18:30:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:09:55.870    18:30:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:09:55.870   18:30:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 lsblk
00:09:55.870  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:09:56.129  NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
00:09:56.129  sda       8:0    0     5G  0 disk 
00:09:56.129  ├─sda1    8:1    0     1M  0 part 
00:09:56.129  ├─sda2    8:2    0  1000M  0 part /boot
00:09:56.129  ├─sda3    8:3    0   100M  0 part /boot/efi
00:09:56.129  ├─sda4    8:4    0     4M  0 part 
00:09:56.129  └─sda5    8:5    0   3.9G  0 part /home
00:09:56.129                                    /
00:09:56.129  zram0   252:0    0   946M  0 disk [SWAP]
00:09:56.129  nvme0n1 259:1    0 931.5G  0 disk 
00:09:56.129   18:30:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@47 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_remove_ns nqn.2019-07.io.spdk:cnode1 1
00:09:56.388   18:30:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@49 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_remove_listener nqn.2019-07.io.spdk:cnode1 -t vfiouser -a /root/vhost_test/vms/1/muser/domain/muser1/1 -s 0
00:09:56.647   18:30:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@53 -- # vm_exec 1 'echo 1 > /sys/class/nvme/nvme0/device/remove'
00:09:56.647   18:30:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:09:56.647   18:30:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:56.647   18:30:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:56.647   18:30:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:09:56.647   18:30:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@339 -- # shift
00:09:56.647    18:30:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:09:56.647    18:30:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:09:56.647    18:30:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:56.647    18:30:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:56.647    18:30:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:09:56.647    18:30:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:09:56.647   18:30:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'echo 1 > /sys/class/nvme/nvme0/device/remove'
00:09:56.647  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:09:56.906   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@55 -- # vm_shutdown_all
00:09:56.906   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@487 -- # local timeo=90 vms vm
00:09:56.906   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@489 -- # vms=($(vm_list_all))
00:09:56.906    18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@489 -- # vm_list_all
00:09:56.906    18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@466 -- # vms=()
00:09:56.906    18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@466 -- # local vms
00:09:56.906    18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:09:56.906    18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:09:56.906    18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:09:56.906   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:09:56.906   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@492 -- # vm_shutdown 1
00:09:56.906   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@417 -- # vm_num_is_valid 1
00:09:56.906   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:56.907   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:56.907   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/1
00:09:56.907   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/1 ]]
00:09:56.907   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@424 -- # vm_is_running 1
00:09:56.907   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:09:56.907   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:56.907   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:56.907   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:09:56.907   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:09:56.907   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:09:56.907    18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:09:56.907   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # vm_pid=415156
00:09:56.907   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 415156
00:09:56.907   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@380 -- # return 0
00:09:56.907   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1'
00:09:56.907   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1'
00:09:56.907   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:56.907   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:09:56.907   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:56.907   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:56.907   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:09:56.907   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1'
00:09:56.907  INFO: Shutting down virtual machine /root/vhost_test/vms/1
00:09:56.907   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@432 -- # set +e
00:09:56.907   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@433 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\'''
00:09:56.907   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:09:56.907   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:56.907   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:56.907   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:09:56.907   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@339 -- # shift
00:09:56.907    18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:09:56.907    18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:09:56.907    18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:56.907    18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:56.907    18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:09:56.907    18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:09:56.907   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:09:56.907  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:09:57.167   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@434 -- # notice 'VM1 is shutting down - wait a while to complete'
00:09:57.167   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete'
00:09:57.167   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:57.167   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:09:57.167   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:57.167   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:57.167   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:09:57.167   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete'
00:09:57.167  INFO: VM1 is shutting down - wait a while to complete
00:09:57.167   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@435 -- # set -e
00:09:57.167   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@495 -- # notice 'Waiting for VMs to shutdown...'
00:09:57.167   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...'
00:09:57.167   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:57.167   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:09:57.167   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:57.167   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:57.167   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:09:57.167   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...'
00:09:57.167  INFO: Waiting for VMs to shutdown...
00:09:57.167   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:09:57.167   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:09:57.167   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:09:57.167   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:09:57.167   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:57.167   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:57.167   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:09:57.167   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:09:57.167   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:09:57.167    18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:09:57.167   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # vm_pid=415156
00:09:57.167   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 415156
00:09:57.167   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@380 -- # return 0
00:09:57.167   18:30:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:09:58.106   18:30:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:09:58.106   18:30:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:09:58.106   18:30:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:09:58.106   18:30:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:09:58.106   18:30:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:58.106   18:30:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:58.106   18:30:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:09:58.106   18:30:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:09:58.106   18:30:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:09:58.106    18:30:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:09:58.106   18:30:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # vm_pid=415156
00:09:58.106   18:30:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 415156
00:09:58.106   18:30:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@380 -- # return 0
00:09:58.106   18:30:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:09:59.045   18:30:45 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:09:59.045   18:30:45 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:09:59.045   18:30:45 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:09:59.045   18:30:45 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:09:59.045   18:30:45 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:59.045   18:30:45 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:59.045   18:30:45 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:09:59.045   18:30:45 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:09:59.045   18:30:45 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@373 -- # return 1
00:09:59.045   18:30:45 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:09:59.045   18:30:45 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:09:59.982   18:30:46 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 0 > 0 ))
00:09:59.982   18:30:46 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@503 -- # (( 0 == 0 ))
00:09:59.982   18:30:46 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@504 -- # notice 'All VMs successfully shut down'
00:09:59.982   18:30:46 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down'
00:09:59.982   18:30:46 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:59.982   18:30:46 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:09:59.982   18:30:46 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:59.982   18:30:46 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:59.982   18:30:46 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:09:59.982   18:30:46 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down'
00:09:59.982  INFO: All VMs successfully shut down
00:09:59.982   18:30:46 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@505 -- # return 0
00:09:59.982   18:30:46 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@57 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_nvme_detach_controller Nvme0
00:10:01.888   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@58 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_delete_subsystem nqn.2019-07.io.spdk:cnode1
00:10:01.888   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@60 -- # vhosttestfini
00:10:01.888   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@54 -- # '[' '' == iso ']'
00:10:01.888   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@1 -- # clean_vfio_user
00:10:01.888   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@6 -- # vm_kill_all
00:10:01.888   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@476 -- # local vm
00:10:01.888    18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@477 -- # vm_list_all
00:10:01.888    18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@466 -- # vms=()
00:10:01.888    18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@466 -- # local vms
00:10:01.888    18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:10:01.888    18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:10:01.888    18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:10:01.888   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@477 -- # for vm in $(vm_list_all)
00:10:01.888   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@478 -- # vm_kill 1
00:10:01.888   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@442 -- # vm_num_is_valid 1
00:10:01.889   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:01.889   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:01.889   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@443 -- # local vm_dir=/root/vhost_test/vms/1
00:10:01.889   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@445 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:10:01.889   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@446 -- # return 0
00:10:01.889   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@481 -- # rm -rf /root/vhost_test/vms
00:10:01.889   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@7 -- # vhost_kill 0
00:10:01.889   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@202 -- # local rc=0
00:10:01.889   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@203 -- # local vhost_name=0
00:10:01.889   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@205 -- # [[ -z 0 ]]
00:10:01.889   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@210 -- # local vhost_dir
00:10:01.889    18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@211 -- # get_vhost_dir 0
00:10:01.889    18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:10:01.889    18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:10:01.889    18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:10:01.889   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@211 -- # vhost_dir=/root/vhost_test/vhost/0
00:10:01.889   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@212 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:10:01.889   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@214 -- # [[ ! -r /root/vhost_test/vhost/0/vhost.pid ]]
00:10:01.889   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@219 -- # timing_enter vhost_kill
00:10:01.889   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:01.889   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:10:01.889   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@220 -- # local vhost_pid
00:10:01.889    18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@221 -- # cat /root/vhost_test/vhost/0/vhost.pid
00:10:01.889   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@221 -- # vhost_pid=408550
00:10:01.889   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@222 -- # notice 'killing vhost (PID 408550) app'
00:10:01.889   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'killing vhost (PID 408550) app'
00:10:01.889   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:01.889   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:10:01.889   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:01.889   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:01.889   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:10:01.889   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: killing vhost (PID 408550) app'
00:10:01.889  INFO: killing vhost (PID 408550) app
00:10:01.889   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@224 -- # kill -INT 408550
00:10:01.889   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@225 -- # notice 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:10:01.889   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:10:01.889   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:01.889   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:10:01.889   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:01.889   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:01.889   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:10:01.889   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: sent SIGINT to vhost app - waiting 60 seconds to exit'
00:10:01.889  INFO: sent SIGINT to vhost app - waiting 60 seconds to exit
00:10:01.889   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@226 -- # (( i = 0 ))
00:10:01.889   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@226 -- # (( i < 60 ))
00:10:01.889   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@227 -- # kill -0 408550
00:10:01.889   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@228 -- # echo .
00:10:01.889  .
00:10:01.889   18:30:48 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@229 -- # sleep 1
00:10:02.826   18:30:49 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@226 -- # (( i++ ))
00:10:02.827   18:30:49 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@226 -- # (( i < 60 ))
00:10:02.827   18:30:49 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@227 -- # kill -0 408550
00:10:02.827  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 227: kill: (408550) - No such process
00:10:02.827   18:30:49 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@231 -- # break
00:10:02.827   18:30:49 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@234 -- # kill -0 408550
00:10:02.827  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 234: kill: (408550) - No such process
00:10:02.827   18:30:49 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@239 -- # kill -0 408550
00:10:02.827  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 239: kill: (408550) - No such process
00:10:02.827   18:30:49 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@245 -- # is_pid_child 408550
00:10:02.827   18:30:49 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1668 -- # local pid=408550 _pid
00:10:02.827    18:30:49 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1667 -- # jobs -pr
00:10:02.827   18:30:49 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1670 -- # read -r _pid
00:10:02.827   18:30:49 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1671 -- # (( pid == _pid ))
00:10:02.827   18:30:49 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1670 -- # read -r _pid
00:10:02.827   18:30:49 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1674 -- # return 1
00:10:02.827   18:30:49 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@257 -- # timing_exit vhost_kill
00:10:02.827   18:30:49 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:02.827   18:30:49 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:10:03.087   18:30:49 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@259 -- # rm -rf /root/vhost_test/vhost/0
00:10:03.087   18:30:49 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@261 -- # return 0
00:10:03.087  
00:10:03.087  real	1m5.691s
00:10:03.087  user	4m17.195s
00:10:03.087  sys	0m1.461s
00:10:03.087   18:30:49 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:03.087   18:30:49 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:10:03.087  ************************************
00:10:03.087  END TEST vfio_user_nvme_restart_vm
00:10:03.087  ************************************
00:10:03.087   18:30:49 vfio_user_qemu -- vfio_user/vfio_user.sh@17 -- # run_test vfio_user_virtio_blk_restart_vm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_restart_vm.sh virtio_blk
00:10:03.087   18:30:49 vfio_user_qemu -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:03.087   18:30:49 vfio_user_qemu -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:03.087   18:30:49 vfio_user_qemu -- common/autotest_common.sh@10 -- # set +x
00:10:03.087  ************************************
00:10:03.087  START TEST vfio_user_virtio_blk_restart_vm
00:10:03.087  ************************************
00:10:03.087   18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_restart_vm.sh virtio_blk
00:10:03.087  * Looking for test storage...
00:10:03.087  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:10:03.087    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:10:03.087     18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1693 -- # lcov --version
00:10:03.087     18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:10:03.087    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:10:03.087    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:03.087    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:03.088    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:03.088    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@336 -- # IFS=.-:
00:10:03.088    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@336 -- # read -ra ver1
00:10:03.088    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@337 -- # IFS=.-:
00:10:03.088    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@337 -- # read -ra ver2
00:10:03.088    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@338 -- # local 'op=<'
00:10:03.088    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@340 -- # ver1_l=2
00:10:03.088    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@341 -- # ver2_l=1
00:10:03.088    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:03.088    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@344 -- # case "$op" in
00:10:03.088    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@345 -- # : 1
00:10:03.088    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:03.088    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:03.088     18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@365 -- # decimal 1
00:10:03.088     18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@353 -- # local d=1
00:10:03.088     18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:03.088     18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@355 -- # echo 1
00:10:03.088    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@365 -- # ver1[v]=1
00:10:03.088     18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@366 -- # decimal 2
00:10:03.088     18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@353 -- # local d=2
00:10:03.088     18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:03.088     18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@355 -- # echo 2
00:10:03.088    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@366 -- # ver2[v]=2
00:10:03.088    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:03.088    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:03.088    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@368 -- # return 0
00:10:03.088    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:03.088    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:10:03.088  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:03.088  		--rc genhtml_branch_coverage=1
00:10:03.088  		--rc genhtml_function_coverage=1
00:10:03.088  		--rc genhtml_legend=1
00:10:03.088  		--rc geninfo_all_blocks=1
00:10:03.088  		--rc geninfo_unexecuted_blocks=1
00:10:03.088  		
00:10:03.088  		'
00:10:03.088    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:10:03.088  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:03.088  		--rc genhtml_branch_coverage=1
00:10:03.088  		--rc genhtml_function_coverage=1
00:10:03.088  		--rc genhtml_legend=1
00:10:03.088  		--rc geninfo_all_blocks=1
00:10:03.088  		--rc geninfo_unexecuted_blocks=1
00:10:03.088  		
00:10:03.088  		'
00:10:03.088    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:10:03.088  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:03.088  		--rc genhtml_branch_coverage=1
00:10:03.088  		--rc genhtml_function_coverage=1
00:10:03.088  		--rc genhtml_legend=1
00:10:03.088  		--rc geninfo_all_blocks=1
00:10:03.088  		--rc geninfo_unexecuted_blocks=1
00:10:03.088  		
00:10:03.088  		'
00:10:03.088    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:10:03.088  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:03.088  		--rc genhtml_branch_coverage=1
00:10:03.088  		--rc genhtml_function_coverage=1
00:10:03.088  		--rc genhtml_legend=1
00:10:03.088  		--rc geninfo_all_blocks=1
00:10:03.088  		--rc geninfo_unexecuted_blocks=1
00:10:03.088  		
00:10:03.088  		'
00:10:03.088   18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/common.sh
00:10:03.088    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/common.sh@6 -- # : 128
00:10:03.088    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/common.sh@7 -- # : 512
00:10:03.088    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/common.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh
00:10:03.088     18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@6 -- # : false
00:10:03.088     18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@7 -- # : /root/vhost_test
00:10:03.088     18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@8 -- # : /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:10:03.088     18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@9 -- # : qemu-img
00:10:03.088      18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/..
00:10:03.088     18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest
00:10:03.088     18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:10:03.088     18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:10:03.088     18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:10:03.088     18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:10:03.088     18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:10:03.088       18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_restart_vm.sh
00:10:03.088      18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:10:03.088     18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:10:03.088     18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:10:03.088     18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:10:03.088     18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:10:03.088     18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:10:03.088     18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config
00:10:03.088      18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:10:03.088      18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@2 -- # vhost_0_main_core=0
00:10:03.088      18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:10:03.088      18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:10:03.088      18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:10:03.088      18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:10:03.088      18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:10:03.088      18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:10:03.088      18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:10:03.088      18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:10:03.088      18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:10:03.088      18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:10:03.089      18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:10:03.089      18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:10:03.089      18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:10:03.089      18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:10:03.089      18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:10:03.089      18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:10:03.089      18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:10:03.089      18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:10:03.089      18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:10:03.089      18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:10:03.089      18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:10:03.089      18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:10:03.089      18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:10:03.089      18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:10:03.089     18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh
00:10:03.089      18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:10:03.089      18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:10:03.089      18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:10:03.089      18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler
00:10:03.089      18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:10:03.089      18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh
00:10:03.089       18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:10:03.089        18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/cgroups.sh@244 -- # check_cgroup
00:10:03.089        18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:10:03.089        18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:10:03.089        18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/cgroups.sh@10 -- # echo 2
00:10:03.089       18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/cgroups.sh@244 -- # cgroup_version=2
00:10:03.089    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/common.sh@11 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:10:03.089    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/common.sh@14 -- # [[ ! -e /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 ]]
00:10:03.089    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/common.sh@19 -- # QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:10:03.089   18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/common.sh
00:10:03.089   18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@12 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/autotest.config
00:10:03.089    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@1 -- # vhost_0_reactor_mask='[0-3]'
00:10:03.089    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@2 -- # vhost_0_main_core=0
00:10:03.089    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@4 -- # VM_0_qemu_mask=4-5
00:10:03.089    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:10:03.089    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@7 -- # VM_1_qemu_mask=6-7
00:10:03.089    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:10:03.089    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@10 -- # VM_2_qemu_mask=8-9
00:10:03.089    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:10:03.089   18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@14 -- # bdfs=($(get_nvme_bdfs))
00:10:03.089    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@14 -- # get_nvme_bdfs
00:10:03.089    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1498 -- # bdfs=()
00:10:03.089    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1498 -- # local bdfs
00:10:03.089    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:10:03.089     18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/gen_nvme.sh
00:10:03.089     18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:10:03.349    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:10:03.349    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0d:00.0
00:10:03.349    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@15 -- # get_vhost_dir 0
00:10:03.349    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:10:03.349    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:10:03.349    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:10:03.349   18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@15 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:10:03.349   18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@17 -- # virtio_type=virtio_blk
00:10:03.349   18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@18 -- # [[ virtio_blk != virtio_blk ]]
00:10:03.349   18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@31 -- # vhosttestinit
00:10:03.349   18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@37 -- # '[' '' == iso ']'
00:10:03.349   18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@41 -- # [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz ]]
00:10:03.349   18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@41 -- # [[ ! -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:10:03.349   18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@46 -- # [[ ! -f /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:10:03.349   18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@33 -- # vfu_tgt_run 0
00:10:03.349   18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@6 -- # local vhost_name=0
00:10:03.349   18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@7 -- # local vfio_user_dir vfu_pid_file rpc_py
00:10:03.349    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@9 -- # get_vhost_dir 0
00:10:03.349    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:10:03.349    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:10:03.349    18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:10:03.349   18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@9 -- # vfio_user_dir=/root/vhost_test/vhost/0
00:10:03.349   18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@10 -- # vfu_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:10:03.349   18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@11 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:10:03.349   18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@13 -- # mkdir -p /root/vhost_test/vhost/0
00:10:03.349   18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@15 -- # timing_enter vfu_tgt_start
00:10:03.349   18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:03.349   18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:10:03.349   18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@16 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -r /root/vhost_test/vhost/0/rpc.sock -m 0xf -s 512
00:10:03.349   18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@17 -- # vfupid=420872
00:10:03.349   18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@18 -- # echo 420872
00:10:03.349   18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@20 -- # echo 'Process pid: 420872'
00:10:03.349  Process pid: 420872
00:10:03.349   18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@21 -- # echo 'waiting for app to run...'
00:10:03.349  waiting for app to run...
00:10:03.350   18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@22 -- # waitforlisten 420872 /root/vhost_test/vhost/0/rpc.sock
00:10:03.350   18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@835 -- # '[' -z 420872 ']'
00:10:03.350   18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@839 -- # local rpc_addr=/root/vhost_test/vhost/0/rpc.sock
00:10:03.350   18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:03.350   18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...'
00:10:03.350  Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...
00:10:03.350   18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:03.350   18:30:49 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:10:03.350  [2024-11-17 18:30:49.800271] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:10:03.350  [2024-11-17 18:30:49.800395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xf -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid420872 ]
00:10:03.350  EAL: No free 2048 kB hugepages reported on node 1
00:10:03.609  [2024-11-17 18:30:50.125552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:03.609  [2024-11-17 18:30:50.171040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:03.609  [2024-11-17 18:30:50.171086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:03.609  [2024-11-17 18:30:50.171079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:03.609  [2024-11-17 18:30:50.171133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:10:04.187   18:30:50 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:04.187   18:30:50 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@868 -- # return 0
00:10:04.187   18:30:50 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@24 -- # timing_exit vfu_tgt_start
00:10:04.187   18:30:50 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:04.187   18:30:50 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:10:04.187   18:30:50 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@35 -- # vfu_vm_dir=/root/vhost_test/vms/vfu_tgt
00:10:04.187   18:30:50 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@36 -- # rm -rf /root/vhost_test/vms/vfu_tgt
00:10:04.187   18:30:50 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@37 -- # mkdir -p /root/vhost_test/vms/vfu_tgt
00:10:04.187   18:30:50 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@39 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_nvme_attach_controller -b Nvme0 -t pcie -a 0000:0d:00.0
00:10:07.474  Nvme0n1
00:10:07.474   18:30:53 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@42 -- # disk_no=1
00:10:07.474   18:30:53 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@43 -- # vm_num=1
00:10:07.474   18:30:53 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@44 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vfu_tgt_set_base_path /root/vhost_test/vms/vfu_tgt
00:10:07.474   18:30:53 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@46 -- # [[ virtio_blk == \v\i\r\t\i\o\_\b\l\k ]]
00:10:07.474   18:30:53 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@47 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vfu_virtio_create_blk_endpoint virtio.1 --bdev-name Nvme0n1 --num-queues=2 --qsize=512 --packed-ring
00:10:07.733   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@53 -- # vm_setup --disk-type=vfio_user_virtio --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=1
00:10:07.733   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@518 -- # xtrace_disable
00:10:07.733   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:10:07.733  INFO: Creating new VM in /root/vhost_test/vms/1
00:10:07.733  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:10:07.733  INFO: TASK MASK: 6-7
00:10:07.733   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@671 -- # local node_num=0
00:10:07.733   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@672 -- # local boot_disk_present=false
00:10:07.733   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:10:07.734  INFO: NUMA NODE: 0
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@677 -- # [[ -n '' ]]
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@686 -- # [[ -z '' ]]
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@701 -- # IFS=,
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@701 -- # read -r disk disk_type _
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@702 -- # [[ -z '' ]]
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@702 -- # disk_type=vfio_user_virtio
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@704 -- # case $disk_type in
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@766 -- # notice 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:10:07.734  INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@767 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/vfu_tgt/virtio.$disk")
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@768 -- # [[ 1 == '' ]]
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@780 -- # [[ -n '' ]]
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@785 -- # (( 0 ))
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh'
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh'
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh'
00:10:07.734  INFO: Saving to /root/vhost_test/vms/1/run.sh
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@787 -- # cat
00:10:07.734    18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/vfu_tgt/virtio.1
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/1/run.sh
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@827 -- # echo 10100
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@828 -- # echo 10101
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@829 -- # echo 10102
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/1/migration_port
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@832 -- # [[ -z '' ]]
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@834 -- # echo 10104
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@835 -- # echo 101
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@837 -- # [[ -z '' ]]
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@838 -- # [[ -z '' ]]
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@54 -- # vm_run 1
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@843 -- # local run_all=false
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@844 -- # local vms_to_run=
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@846 -- # getopts a-: optchar
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@856 -- # false
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@859 -- # shift 0
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@860 -- # for vm in "$@"
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@861 -- # vm_num_is_valid 1
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]]
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@866 -- # vms_to_run+=' 1'
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@871 -- # vm_is_running 1
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@373 -- # return 1
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/1/run.sh'
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh'
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh'
00:10:07.734  INFO: running /root/vhost_test/vms/1/run.sh
00:10:07.734   18:30:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@877 -- # /root/vhost_test/vms/1/run.sh
00:10:07.734  Running VM in /root/vhost_test/vms/1
00:10:08.301  [2024-11-17 18:30:54.618810] tgt_endpoint.c: 167:tgt_accept_poller: *NOTICE*: /root/vhost_test/vms/vfu_tgt/virtio.1: attached successfully
00:10:08.301  Waiting for QEMU pid file
00:10:09.236  === qemu.log ===
00:10:09.236  === qemu.log ===
00:10:09.236   18:30:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@55 -- # vm_wait_for_boot 60 1
00:10:09.236   18:30:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@913 -- # assert_number 60
00:10:09.236   18:30:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@281 -- # [[ 60 =~ [0-9]+ ]]
00:10:09.236   18:30:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@281 -- # return 0
00:10:09.236   18:30:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@915 -- # xtrace_disable
00:10:09.236   18:30:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:10:09.236  INFO: Waiting for VMs to boot
00:10:09.236  INFO: waiting for VM1 (/root/vhost_test/vms/1)
00:10:35.785  
00:10:35.785  INFO: VM1 ready
00:10:35.785  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:10:35.785  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:10:35.785  INFO: all VMs ready
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@973 -- # return 0
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@58 -- # fio_bin=--fio-bin=/usr/src/fio-static/fio
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@59 -- # fio_disks=
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@60 -- # qemu_mask_param=VM_1_qemu_mask
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@62 -- # host_name=VM-1-6-7
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@63 -- # vm_exec 1 'hostname VM-1-6-7'
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:10:35.785    18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:10:35.785    18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:10:35.785    18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:35.785    18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:35.785    18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:10:35.785    18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'hostname VM-1-6-7'
00:10:35.785  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@64 -- # vm_start_fio_server --fio-bin=/usr/src/fio-static/fio 1
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@977 -- # local OPTIND optchar
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@978 -- # local readonly=
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@979 -- # local fio_bin=
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@980 -- # getopts :-: optchar
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@981 -- # case "$optchar" in
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@983 -- # case "$OPTARG" in
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@984 -- # local fio_bin=/usr/src/fio-static/fio
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@980 -- # getopts :-: optchar
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@993 -- # shift 1
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@994 -- # for vm_num in "$@"
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@995 -- # notice 'Starting fio server on VM1'
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Starting fio server on VM1'
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Starting fio server on VM1'
00:10:35.785  INFO: Starting fio server on VM1
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@996 -- # [[ /usr/src/fio-static/fio != '' ]]
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@997 -- # vm_exec 1 'cat > /root/fio; chmod +x /root/fio'
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:10:35.785    18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:10:35.785    18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:10:35.785    18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:35.785    18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:35.785    18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:10:35.785    18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/fio; chmod +x /root/fio'
00:10:35.785  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@998 -- # vm_exec 1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:10:35.785    18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:10:35.785    18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:10:35.785    18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:35.785    18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:35.785    18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:10:35.785    18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:10:35.785  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@66 -- # disks_before_restart=
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@67 -- # get_disks virtio_blk 1
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@24 -- # [[ virtio_blk == \v\i\r\t\i\o\_\s\c\s\i ]]
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@26 -- # [[ virtio_blk == \v\i\r\t\i\o\_\b\l\k ]]
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@27 -- # vm_check_blk_location 1
00:10:35.785   18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1035 -- # local 'script=shopt -s nullglob; cd /sys/block; echo vd*'
00:10:35.785    18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1036 -- # echo 'shopt -s nullglob; cd /sys/block; echo vd*'
00:10:35.785    18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1036 -- # vm_exec 1 bash -s
00:10:35.785    18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:10:35.785    18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:35.785    18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:35.785    18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:10:35.785    18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:10:35.785     18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:10:35.785     18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:10:35.785     18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:35.785     18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:35.785     18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:10:35.785     18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:10:35.785    18:31:20 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 bash -s
00:10:35.785  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:10:35.785   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1036 -- # SCSI_DISK=vda
00:10:35.785   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1038 -- # [[ -z vda ]]
00:10:35.785   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@68 -- # disks_before_restart=vda
00:10:35.785    18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@70 -- # printf :/dev/%s vda
00:10:35.785   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@70 -- # fio_disks=' --vm=1:/dev/vda'
00:10:35.785   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@71 -- # job_file=default_integrity.job
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@74 -- # run_fio --fio-bin=/usr/src/fio-static/fio --job-file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job --out=/root/vhost_test/fio_results --vm=1:/dev/vda
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1053 -- # local arg
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1054 -- # local job_file=
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1055 -- # local fio_bin=
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1056 -- # vms=()
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1056 -- # local vms
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1057 -- # local out=
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1058 -- # local vm
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1059 -- # local run_server_mode=true
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1060 -- # local run_plugin_mode=false
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1061 -- # local fio_start_cmd
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1062 -- # local fio_output_format=normal
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1063 -- # local fio_gtod_reduce=false
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1064 -- # local wait_for_fio=true
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1066 -- # for arg in "$@"
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1067 -- # case "$arg" in
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1069 -- # local fio_bin=/usr/src/fio-static/fio
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1066 -- # for arg in "$@"
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1067 -- # case "$arg" in
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1068 -- # local job_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1066 -- # for arg in "$@"
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1067 -- # case "$arg" in
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1072 -- # local out=/root/vhost_test/fio_results
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1073 -- # mkdir -p /root/vhost_test/fio_results
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1066 -- # for arg in "$@"
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1067 -- # case "$arg" in
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1070 -- # vms+=("${arg#*=}")
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1092 -- # [[ -n /usr/src/fio-static/fio ]]
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1092 -- # [[ ! -r /usr/src/fio-static/fio ]]
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1097 -- # [[ -z /usr/src/fio-static/fio ]]
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1101 -- # [[ ! -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job ]]
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1106 -- # fio_start_cmd='/usr/src/fio-static/fio --eta=never '
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1108 -- # local job_fname
00:10:35.786    18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1109 -- # basename /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1109 -- # job_fname=default_integrity.job
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1110 -- # log_fname=default_integrity.log
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1111 -- # fio_start_cmd+=' --output=/root/vhost_test/fio_results/default_integrity.log --output-format=normal '
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1114 -- # for vm in "${vms[@]}"
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1115 -- # local vm_num=1
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1116 -- # local vmdisks=/dev/vda
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1118 -- # sed 's@filename=@filename=/dev/vda@;s@description=\(.*\)@description=\1 (VM=1)@' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1119 -- # vm_exec 1 'cat > /root/default_integrity.job'
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:10:35.786    18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:10:35.786    18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:10:35.786    18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:35.786    18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:35.786    18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:10:35.786    18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/default_integrity.job'
00:10:35.786  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1121 -- # false
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1125 -- # vm_exec 1 cat /root/default_integrity.job
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:10:35.786    18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:10:35.786    18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:10:35.786    18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:35.786    18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:35.786    18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:10:35.786    18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 cat /root/default_integrity.job
00:10:35.786  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:10:35.786  [global]
00:10:35.786  blocksize_range=4k-512k
00:10:35.786  iodepth=512
00:10:35.786  iodepth_batch=128
00:10:35.786  iodepth_low=256
00:10:35.786  ioengine=libaio
00:10:35.786  size=1G
00:10:35.786  io_size=4G
00:10:35.786  filename=/dev/vda
00:10:35.786  group_reporting
00:10:35.786  thread
00:10:35.786  numjobs=1
00:10:35.786  direct=1
00:10:35.786  rw=randwrite
00:10:35.786  do_verify=1
00:10:35.786  verify=md5
00:10:35.786  verify_backlog=1024
00:10:35.786  fsync_on_close=1
00:10:35.786  verify_state_save=0
00:10:35.786  [nvme-host]
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1127 -- # true
00:10:35.786    18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1128 -- # vm_fio_socket 1
00:10:35.786    18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@326 -- # vm_num_is_valid 1
00:10:35.786    18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:35.786    18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:35.786    18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@327 -- # local vm_dir=/root/vhost_test/vms/1
00:10:35.786    18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@329 -- # cat /root/vhost_test/vms/1/fio_socket
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1128 -- # fio_start_cmd+='--client=127.0.0.1,10101 --remote-config /root/default_integrity.job '
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1131 -- # true
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1147 -- # true
00:10:35.786   18:31:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1161 -- # /usr/src/fio-static/fio --eta=never --output=/root/vhost_test/fio_results/default_integrity.log --output-format=normal --client=127.0.0.1,10101 --remote-config /root/default_integrity.job
00:10:47.998   18:31:32 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1162 -- # sleep 1
00:10:47.998   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1164 -- # [[ normal == \j\s\o\n ]]
00:10:47.998   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1172 -- # [[ ! -n '' ]]
00:10:47.998   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1173 -- # cat /root/vhost_test/fio_results/default_integrity.log
00:10:47.998  hostname=VM-1-6-7, be=0, 64-bit, os=Linux, arch=x86-64, fio=fio-3.35, flags=1
00:10:47.998  <VM-1-6-7> nvme-host: (g=0): rw=randwrite, bs=(R) 4096B-512KiB, (W) 4096B-512KiB, (T) 4096B-512KiB, ioengine=libaio, iodepth=512
00:10:47.998  <VM-1-6-7> Starting 1 thread
00:10:47.998  <VM-1-6-7> 
00:10:47.998  nvme-host: (groupid=0, jobs=1): err= 0: pid=951: Sun Nov 17 18:31:32 2024
00:10:47.998    read: IOPS=1307, BW=219MiB/s (230MB/s)(2048MiB/9340msec)
00:10:47.999      slat (usec): min=37, max=15245, avg=2222.93, stdev=3244.75
00:10:47.999      clat (msec): min=7, max=374, avg=137.26, stdev=78.12
00:10:47.999       lat (msec): min=7, max=376, avg=139.48, stdev=77.73
00:10:47.999      clat percentiles (msec):
00:10:47.999       |  1.00th=[   11],  5.00th=[   17], 10.00th=[   44], 20.00th=[   73],
00:10:47.999       | 30.00th=[   88], 40.00th=[  109], 50.00th=[  127], 60.00th=[  148],
00:10:47.999       | 70.00th=[  171], 80.00th=[  201], 90.00th=[  247], 95.00th=[  288],
00:10:47.999       | 99.00th=[  347], 99.50th=[  359], 99.90th=[  372], 99.95th=[  372],
00:10:47.999       | 99.99th=[  376]
00:10:47.999    write: IOPS=1388, BW=233MiB/s (244MB/s)(2048MiB/8790msec); 0 zone resets
00:10:47.999      slat (usec): min=241, max=92849, avg=21371.31, stdev=15509.57
00:10:47.999      clat (msec): min=6, max=292, avg=118.84, stdev=64.95
00:10:47.999       lat (msec): min=7, max=340, avg=140.21, stdev=69.07
00:10:47.999      clat percentiles (msec):
00:10:47.999       |  1.00th=[   10],  5.00th=[   23], 10.00th=[   32], 20.00th=[   65],
00:10:47.999       | 30.00th=[   82], 40.00th=[   94], 50.00th=[  110], 60.00th=[  128],
00:10:47.999       | 70.00th=[  150], 80.00th=[  174], 90.00th=[  211], 95.00th=[  239],
00:10:47.999       | 99.00th=[  266], 99.50th=[  292], 99.90th=[  292], 99.95th=[  292],
00:10:47.999       | 99.99th=[  292]
00:10:47.999     bw (  KiB/s): min=100136, max=472048, per=100.00%, avg=240081.88, stdev=105583.77, samples=17
00:10:47.999     iops        : min=  512, max= 2048, avg=1380.71, stdev=534.86, samples=17
00:10:47.999    lat (msec)   : 10=1.06%, 20=4.04%, 50=8.01%, 100=26.85%, 250=53.06%
00:10:47.999    lat (msec)   : 500=6.99%
00:10:47.999    cpu          : usr=94.71%, sys=1.78%, ctx=496, majf=0, minf=34
00:10:47.999    IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.5%, >=64=99.1%
00:10:47.999       submit    : 0=0.0%, 4=0.0%, 8=1.2%, 16=0.0%, 32=0.0%, 64=19.2%, >=64=79.6%
00:10:47.999       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:10:47.999       issued rwts: total=12208,12208,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:47.999       latency   : target=0, window=0, percentile=100.00%, depth=512
00:10:47.999  
00:10:47.999  Run status group 0 (all jobs):
00:10:47.999     READ: bw=219MiB/s (230MB/s), 219MiB/s-219MiB/s (230MB/s-230MB/s), io=2048MiB (2147MB), run=9340-9340msec
00:10:47.999    WRITE: bw=233MiB/s (244MB/s), 233MiB/s-233MiB/s (244MB/s-244MB/s), io=2048MiB (2147MB), run=8790-8790msec
00:10:47.999  
00:10:47.999  Disk stats (read/write):
00:10:47.999    vda: ios=12115/12141, merge=51/72, ticks=136769/103641, in_queue=240411, util=29.00%
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@77 -- # notice 'Shutting down virtual machine...'
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine...'
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine...'
00:10:47.999  INFO: Shutting down virtual machine...
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@78 -- # vm_shutdown_all
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@487 -- # local timeo=90 vms vm
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@489 -- # vms=($(vm_list_all))
00:10:47.999    18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@489 -- # vm_list_all
00:10:47.999    18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@466 -- # vms=()
00:10:47.999    18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@466 -- # local vms
00:10:47.999    18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:10:47.999    18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:10:47.999    18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@492 -- # vm_shutdown 1
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@417 -- # vm_num_is_valid 1
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/1
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/1 ]]
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@424 -- # vm_is_running 1
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:10:47.999    18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # vm_pid=421745
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 421745
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@380 -- # return 0
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1'
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1'
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1'
00:10:47.999  INFO: Shutting down virtual machine /root/vhost_test/vms/1
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@432 -- # set +e
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@433 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\'''
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:10:47.999    18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:10:47.999    18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:10:47.999    18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:47.999    18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:47.999    18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:10:47.999    18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:10:47.999  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@434 -- # notice 'VM1 is shutting down - wait a while to complete'
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete'
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete'
00:10:47.999  INFO: VM1 is shutting down - wait a while to complete
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@435 -- # set -e
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@495 -- # notice 'Waiting for VMs to shutdown...'
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...'
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...'
00:10:47.999  INFO: Waiting for VMs to shutdown...
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:10:47.999   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:10:48.000   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:48.000   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:48.000   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:10:48.000   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:10:48.000   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:10:48.000    18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:10:48.000   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # vm_pid=421745
00:10:48.000   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 421745
00:10:48.000   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@380 -- # return 0
00:10:48.000   18:31:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:10:48.259   18:31:34 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:10:48.259   18:31:34 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:10:48.259   18:31:34 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:10:48.259   18:31:34 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:10:48.259   18:31:34 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:48.259   18:31:34 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:48.259   18:31:34 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:10:48.259   18:31:34 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:10:48.259   18:31:34 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:10:48.259    18:31:34 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:10:48.259   18:31:34 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # vm_pid=421745
00:10:48.259   18:31:34 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 421745
00:10:48.259   18:31:34 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@380 -- # return 0
00:10:48.259   18:31:34 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:10:49.195   18:31:35 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:10:49.195   18:31:35 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:10:49.195   18:31:35 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:10:49.195   18:31:35 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:10:49.195   18:31:35 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:49.195   18:31:35 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:49.195   18:31:35 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:10:49.195   18:31:35 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:10:49.195   18:31:35 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@373 -- # return 1
00:10:49.195   18:31:35 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:10:49.195   18:31:35 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:10:50.132   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 0 > 0 ))
00:10:50.132   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@503 -- # (( 0 == 0 ))
00:10:50.132   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@504 -- # notice 'All VMs successfully shut down'
00:10:50.132   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down'
00:10:50.132   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:50.132   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:10:50.132   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:50.132   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:50.132   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:10:50.132   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down'
00:10:50.132  INFO: All VMs successfully shut down
00:10:50.132   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@505 -- # return 0
00:10:50.132   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@81 -- # vm_setup --disk-type=vfio_user_virtio --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=1
00:10:50.132   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@518 -- # xtrace_disable
00:10:50.132   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:10:50.132  WARN: removing existing VM in '/root/vhost_test/vms/1'
00:10:50.132  INFO: Creating new VM in /root/vhost_test/vms/1
00:10:50.132  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:10:50.132  INFO: TASK MASK: 6-7
00:10:50.132   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@671 -- # local node_num=0
00:10:50.132   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@672 -- # local boot_disk_present=false
00:10:50.132   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:10:50.132   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:10:50.132   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:50.132   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:10:50.132   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:50.132   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:50.132   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:10:50.132   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:10:50.132  INFO: NUMA NODE: 0
00:10:50.132   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:10:50.133   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:10:50.133   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:10:50.133   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:10:50.133   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@677 -- # [[ -n '' ]]
00:10:50.133   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:10:50.133   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:10:50.133   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:10:50.133   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:10:50.133   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:10:50.133   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:10:50.133   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:10:50.133   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:10:50.133   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@686 -- # [[ -z '' ]]
00:10:50.133   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:10:50.133   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:10:50.133   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:10:50.133   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:10:50.133   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:10:50.133   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@701 -- # IFS=,
00:10:50.133   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@701 -- # read -r disk disk_type _
00:10:50.133   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@702 -- # [[ -z '' ]]
00:10:50.133   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@702 -- # disk_type=vfio_user_virtio
00:10:50.133   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@704 -- # case $disk_type in
00:10:50.133   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@766 -- # notice 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:10:50.133   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:10:50.133   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:50.133   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:10:50.133   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:50.133   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:50.133   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:10:50.133   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:10:50.133  INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1
00:10:50.133   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@767 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/vfu_tgt/virtio.$disk")
00:10:50.392   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@768 -- # [[ 1 == '' ]]
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@780 -- # [[ -n '' ]]
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@785 -- # (( 0 ))
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh'
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh'
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh'
00:10:50.393  INFO: Saving to /root/vhost_test/vms/1/run.sh
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@787 -- # cat
00:10:50.393    18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/vfu_tgt/virtio.1
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/1/run.sh
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@827 -- # echo 10100
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@828 -- # echo 10101
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@829 -- # echo 10102
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/1/migration_port
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@832 -- # [[ -z '' ]]
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@834 -- # echo 10104
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@835 -- # echo 101
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@837 -- # [[ -z '' ]]
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@838 -- # [[ -z '' ]]
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@82 -- # vm_run 1
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@843 -- # local run_all=false
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@844 -- # local vms_to_run=
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@846 -- # getopts a-: optchar
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@856 -- # false
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@859 -- # shift 0
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@860 -- # for vm in "$@"
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@861 -- # vm_num_is_valid 1
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]]
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@866 -- # vms_to_run+=' 1'
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@871 -- # vm_is_running 1
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@373 -- # return 1
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/1/run.sh'
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh'
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh'
00:10:50.393  INFO: running /root/vhost_test/vms/1/run.sh
00:10:50.393   18:31:36 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@877 -- # /root/vhost_test/vms/1/run.sh
00:10:50.393  Running VM in /root/vhost_test/vms/1
00:10:50.652  [2024-11-17 18:31:37.117390] tgt_endpoint.c: 167:tgt_accept_poller: *NOTICE*: /root/vhost_test/vms/vfu_tgt/virtio.1: attached successfully
00:10:50.653  Waiting for QEMU pid file
00:10:52.030  === qemu.log ===
00:10:52.030  === qemu.log ===
00:10:52.030   18:31:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@83 -- # vm_wait_for_boot 60 1
00:10:52.030   18:31:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@913 -- # assert_number 60
00:10:52.030   18:31:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@281 -- # [[ 60 =~ [0-9]+ ]]
00:10:52.030   18:31:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@281 -- # return 0
00:10:52.030   18:31:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@915 -- # xtrace_disable
00:10:52.030   18:31:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:10:52.030  INFO: Waiting for VMs to boot
00:10:52.030  INFO: waiting for VM1 (/root/vhost_test/vms/1)
00:11:18.581  
00:11:18.581  INFO: VM1 ready
00:11:18.581  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:11:18.581  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:11:18.581  INFO: all VMs ready
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@973 -- # return 0
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@86 -- # disks_after_restart=
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@87 -- # get_disks virtio_blk 1
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@24 -- # [[ virtio_blk == \v\i\r\t\i\o\_\s\c\s\i ]]
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@26 -- # [[ virtio_blk == \v\i\r\t\i\o\_\b\l\k ]]
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@27 -- # vm_check_blk_location 1
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1035 -- # local 'script=shopt -s nullglob; cd /sys/block; echo vd*'
00:11:18.581    18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1036 -- # echo 'shopt -s nullglob; cd /sys/block; echo vd*'
00:11:18.581    18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1036 -- # vm_exec 1 bash -s
00:11:18.581    18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:11:18.581    18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:18.581    18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:18.581    18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:11:18.581    18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:11:18.581     18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:11:18.581     18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:11:18.581     18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:18.581     18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:18.581     18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:11:18.581     18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:11:18.581    18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 bash -s
00:11:18.581  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1036 -- # SCSI_DISK=vda
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1038 -- # [[ -z vda ]]
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@88 -- # disks_after_restart=vda
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@90 -- # [[ vda != \v\d\a ]]
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@96 -- # notice 'Shutting down virtual machine...'
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine...'
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine...'
00:11:18.581  INFO: Shutting down virtual machine...
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@97 -- # vm_shutdown_all
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@487 -- # local timeo=90 vms vm
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@489 -- # vms=($(vm_list_all))
00:11:18.581    18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@489 -- # vm_list_all
00:11:18.581    18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@466 -- # vms=()
00:11:18.581    18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@466 -- # local vms
00:11:18.581    18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:11:18.581    18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:11:18.581    18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@492 -- # vm_shutdown 1
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@417 -- # vm_num_is_valid 1
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/1
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/1 ]]
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@424 -- # vm_is_running 1
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:11:18.581    18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # vm_pid=429340
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 429340
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@380 -- # return 0
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1'
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1'
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1'
00:11:18.581  INFO: Shutting down virtual machine /root/vhost_test/vms/1
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@432 -- # set +e
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@433 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\'''
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:11:18.581   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:18.582   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:18.582   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:11:18.582   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:11:18.582    18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:11:18.582    18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:11:18.582    18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:18.582    18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:18.582    18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:11:18.582    18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:11:18.582   18:32:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:11:18.582  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:11:18.582  Connection to 127.0.0.1 closed by remote host.
00:11:18.582   18:32:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@433 -- # true
00:11:18.582   18:32:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@434 -- # notice 'VM1 is shutting down - wait a while to complete'
00:11:18.582   18:32:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete'
00:11:18.582   18:32:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:11:18.582   18:32:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:11:18.582   18:32:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:11:18.582   18:32:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:18.582   18:32:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:11:18.582   18:32:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete'
00:11:18.582  INFO: VM1 is shutting down - wait a while to complete
00:11:18.582   18:32:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@435 -- # set -e
00:11:18.582   18:32:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@495 -- # notice 'Waiting for VMs to shutdown...'
00:11:18.582   18:32:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...'
00:11:18.582   18:32:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:11:18.582   18:32:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:11:18.582   18:32:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:11:18.582   18:32:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:18.582   18:32:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:11:18.582   18:32:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...'
00:11:18.582  INFO: Waiting for VMs to shutdown...
00:11:18.582   18:32:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:11:18.582   18:32:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:11:18.582   18:32:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:11:18.582   18:32:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:11:18.582   18:32:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:18.582   18:32:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:18.582   18:32:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:11:18.582   18:32:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:11:18.582   18:32:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:11:18.582    18:32:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:11:18.582   18:32:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # vm_pid=429340
00:11:18.582   18:32:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 429340
00:11:18.582   18:32:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@380 -- # return 0
00:11:18.582   18:32:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:11:18.582   18:32:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:11:18.582   18:32:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:11:18.582   18:32:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:11:18.582   18:32:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:11:18.582   18:32:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:18.582   18:32:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:18.582   18:32:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:11:18.582   18:32:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:11:18.582   18:32:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:11:18.582    18:32:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:11:18.582   18:32:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # vm_pid=429340
00:11:18.582   18:32:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 429340
00:11:18.582   18:32:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@380 -- # return 0
00:11:18.582   18:32:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:11:18.841   18:32:05 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:11:18.841   18:32:05 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:11:18.841   18:32:05 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:11:18.841   18:32:05 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:11:18.841   18:32:05 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:18.841   18:32:05 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:18.841   18:32:05 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:11:18.841   18:32:05 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:11:18.841   18:32:05 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@373 -- # return 1
00:11:18.841   18:32:05 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:11:18.841   18:32:05 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:11:19.778   18:32:06 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 0 > 0 ))
00:11:19.778   18:32:06 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@503 -- # (( 0 == 0 ))
00:11:19.778   18:32:06 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@504 -- # notice 'All VMs successfully shut down'
00:11:19.778   18:32:06 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down'
00:11:19.778   18:32:06 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:11:19.778   18:32:06 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:11:19.778   18:32:06 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:11:19.778   18:32:06 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:19.778   18:32:06 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:11:19.778   18:32:06 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down'
00:11:19.778  INFO: All VMs successfully shut down
00:11:19.778   18:32:06 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@505 -- # return 0
00:11:19.778   18:32:06 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@99 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_nvme_detach_controller Nvme0
00:11:20.037  [2024-11-17 18:32:06.429706] vfu_virtio_blk.c: 384:bdev_event_cb: *NOTICE*: bdev name (Nvme0n1) received event(SPDK_BDEV_EVENT_REMOVE)
00:11:21.415   18:32:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@101 -- # vhost_kill 0
00:11:21.415   18:32:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@202 -- # local rc=0
00:11:21.415   18:32:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@203 -- # local vhost_name=0
00:11:21.415   18:32:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@205 -- # [[ -z 0 ]]
00:11:21.415   18:32:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@210 -- # local vhost_dir
00:11:21.415    18:32:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@211 -- # get_vhost_dir 0
00:11:21.415    18:32:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:11:21.415    18:32:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:11:21.415    18:32:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:11:21.415   18:32:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@211 -- # vhost_dir=/root/vhost_test/vhost/0
00:11:21.415   18:32:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@212 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:11:21.415   18:32:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@214 -- # [[ ! -r /root/vhost_test/vhost/0/vhost.pid ]]
00:11:21.415   18:32:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@219 -- # timing_enter vhost_kill
00:11:21.415   18:32:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:21.415   18:32:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:11:21.415   18:32:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@220 -- # local vhost_pid
00:11:21.415    18:32:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@221 -- # cat /root/vhost_test/vhost/0/vhost.pid
00:11:21.415   18:32:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@221 -- # vhost_pid=420872
00:11:21.415   18:32:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@222 -- # notice 'killing vhost (PID 420872) app'
00:11:21.415   18:32:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'killing vhost (PID 420872) app'
00:11:21.415   18:32:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:11:21.415   18:32:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:11:21.415   18:32:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:11:21.415   18:32:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:21.415   18:32:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:11:21.415   18:32:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: killing vhost (PID 420872) app'
00:11:21.415  INFO: killing vhost (PID 420872) app
00:11:21.415   18:32:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@224 -- # kill -INT 420872
00:11:21.415   18:32:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@225 -- # notice 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:11:21.415   18:32:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:11:21.415   18:32:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:11:21.415   18:32:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:11:21.415   18:32:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:11:21.415   18:32:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:21.415   18:32:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:11:21.415   18:32:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: sent SIGINT to vhost app - waiting 60 seconds to exit'
00:11:21.415  INFO: sent SIGINT to vhost app - waiting 60 seconds to exit
00:11:21.415   18:32:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@226 -- # (( i = 0 ))
00:11:21.415   18:32:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@226 -- # (( i < 60 ))
00:11:21.415   18:32:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@227 -- # kill -0 420872
00:11:21.415   18:32:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@228 -- # echo .
00:11:21.415  .
00:11:21.415   18:32:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@229 -- # sleep 1
00:11:22.359   18:32:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@226 -- # (( i++ ))
00:11:22.359   18:32:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@226 -- # (( i < 60 ))
00:11:22.359   18:32:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@227 -- # kill -0 420872
00:11:22.359  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 227: kill: (420872) - No such process
00:11:22.359   18:32:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@231 -- # break
00:11:22.359   18:32:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@234 -- # kill -0 420872
00:11:22.359  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 234: kill: (420872) - No such process
00:11:22.359   18:32:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@239 -- # kill -0 420872
00:11:22.359  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 239: kill: (420872) - No such process
00:11:22.359   18:32:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@245 -- # is_pid_child 420872
00:11:22.359   18:32:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1668 -- # local pid=420872 _pid
00:11:22.359    18:32:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1667 -- # jobs -pr
00:11:22.359   18:32:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1670 -- # read -r _pid
00:11:22.359   18:32:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1671 -- # (( pid == _pid ))
00:11:22.359   18:32:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1670 -- # read -r _pid
00:11:22.359   18:32:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1674 -- # return 1
00:11:22.359   18:32:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@257 -- # timing_exit vhost_kill
00:11:22.359   18:32:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:22.359   18:32:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:11:22.359   18:32:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@259 -- # rm -rf /root/vhost_test/vhost/0
00:11:22.359   18:32:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@261 -- # return 0
00:11:22.359   18:32:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@103 -- # vhosttestfini
00:11:22.359   18:32:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@54 -- # '[' '' == iso ']'
00:11:22.359  
00:11:22.359  real	1m19.376s
00:11:22.359  user	5m12.735s
00:11:22.359  sys	0m1.673s
00:11:22.359   18:32:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:22.359   18:32:08 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:11:22.359  ************************************
00:11:22.359  END TEST vfio_user_virtio_blk_restart_vm
00:11:22.359  ************************************
00:11:22.359   18:32:08 vfio_user_qemu -- vfio_user/vfio_user.sh@18 -- # run_test vfio_user_virtio_scsi_restart_vm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_restart_vm.sh virtio_scsi
00:11:22.359   18:32:08 vfio_user_qemu -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:22.359   18:32:08 vfio_user_qemu -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:22.359   18:32:08 vfio_user_qemu -- common/autotest_common.sh@10 -- # set +x
00:11:22.359  ************************************
00:11:22.359  START TEST vfio_user_virtio_scsi_restart_vm
00:11:22.359  ************************************
00:11:22.359   18:32:08 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_restart_vm.sh virtio_scsi
00:11:22.620  * Looking for test storage...
00:11:22.620  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:11:22.620    18:32:08 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:11:22.620     18:32:08 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1693 -- # lcov --version
00:11:22.620     18:32:08 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:11:22.620    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:11:22.620    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:22.620    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:22.620    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:22.620    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@336 -- # IFS=.-:
00:11:22.620    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@336 -- # read -ra ver1
00:11:22.620    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@337 -- # IFS=.-:
00:11:22.620    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@337 -- # read -ra ver2
00:11:22.620    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@338 -- # local 'op=<'
00:11:22.620    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@340 -- # ver1_l=2
00:11:22.620    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@341 -- # ver2_l=1
00:11:22.620    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:22.620    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@344 -- # case "$op" in
00:11:22.620    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@345 -- # : 1
00:11:22.620    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:22.620    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:22.620     18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@365 -- # decimal 1
00:11:22.620     18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@353 -- # local d=1
00:11:22.620     18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:22.620     18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@355 -- # echo 1
00:11:22.620    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@365 -- # ver1[v]=1
00:11:22.620     18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@366 -- # decimal 2
00:11:22.620     18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@353 -- # local d=2
00:11:22.620     18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:22.620     18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@355 -- # echo 2
00:11:22.620    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@366 -- # ver2[v]=2
00:11:22.620    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:22.620    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:22.620    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@368 -- # return 0
00:11:22.620    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:22.620    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:11:22.620  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:22.620  		--rc genhtml_branch_coverage=1
00:11:22.620  		--rc genhtml_function_coverage=1
00:11:22.620  		--rc genhtml_legend=1
00:11:22.620  		--rc geninfo_all_blocks=1
00:11:22.620  		--rc geninfo_unexecuted_blocks=1
00:11:22.620  		
00:11:22.620  		'
00:11:22.620    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:11:22.620  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:22.620  		--rc genhtml_branch_coverage=1
00:11:22.620  		--rc genhtml_function_coverage=1
00:11:22.620  		--rc genhtml_legend=1
00:11:22.620  		--rc geninfo_all_blocks=1
00:11:22.620  		--rc geninfo_unexecuted_blocks=1
00:11:22.620  		
00:11:22.620  		'
00:11:22.620    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:11:22.620  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:22.620  		--rc genhtml_branch_coverage=1
00:11:22.620  		--rc genhtml_function_coverage=1
00:11:22.620  		--rc genhtml_legend=1
00:11:22.620  		--rc geninfo_all_blocks=1
00:11:22.620  		--rc geninfo_unexecuted_blocks=1
00:11:22.620  		
00:11:22.620  		'
00:11:22.620    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:11:22.620  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:22.620  		--rc genhtml_branch_coverage=1
00:11:22.620  		--rc genhtml_function_coverage=1
00:11:22.620  		--rc genhtml_legend=1
00:11:22.620  		--rc geninfo_all_blocks=1
00:11:22.620  		--rc geninfo_unexecuted_blocks=1
00:11:22.620  		
00:11:22.620  		'
00:11:22.620   18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/common.sh
00:11:22.620    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/common.sh@6 -- # : 128
00:11:22.620    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/common.sh@7 -- # : 512
00:11:22.620    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/common.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh
00:11:22.620     18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@6 -- # : false
00:11:22.620     18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@7 -- # : /root/vhost_test
00:11:22.620     18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@8 -- # : /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:11:22.620     18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@9 -- # : qemu-img
00:11:22.620      18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/..
00:11:22.620     18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest
00:11:22.620     18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:11:22.620     18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:11:22.620     18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:11:22.620     18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:11:22.620     18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:11:22.621       18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_restart_vm.sh
00:11:22.621      18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:11:22.621     18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:11:22.621     18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:11:22.621     18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:11:22.621     18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:11:22.621     18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:11:22.621     18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config
00:11:22.621      18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:11:22.621      18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@2 -- # vhost_0_main_core=0
00:11:22.621      18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:11:22.621      18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:11:22.621      18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:11:22.621      18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:11:22.621      18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:11:22.621      18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:11:22.621      18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:11:22.621      18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:11:22.621      18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:11:22.621      18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:11:22.621      18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:11:22.621      18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:11:22.621      18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:11:22.621      18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:11:22.621      18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:11:22.621      18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:11:22.621      18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:11:22.621      18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:11:22.621      18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:11:22.621      18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:11:22.621      18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:11:22.621      18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:11:22.621      18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:11:22.621      18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:11:22.621     18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh
00:11:22.621      18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:11:22.621      18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:11:22.621      18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:11:22.621      18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler
00:11:22.621      18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:11:22.621      18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh
00:11:22.621       18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:11:22.621        18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/cgroups.sh@244 -- # check_cgroup
00:11:22.621        18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:11:22.621        18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:11:22.621        18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/cgroups.sh@10 -- # echo 2
00:11:22.621       18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/cgroups.sh@244 -- # cgroup_version=2
00:11:22.621    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/common.sh@11 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:11:22.621    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/common.sh@14 -- # [[ ! -e /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 ]]
00:11:22.621    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/common.sh@19 -- # QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:11:22.621   18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/common.sh
00:11:22.621   18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@12 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/autotest.config
00:11:22.621    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@1 -- # vhost_0_reactor_mask='[0-3]'
00:11:22.621    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@2 -- # vhost_0_main_core=0
00:11:22.621    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@4 -- # VM_0_qemu_mask=4-5
00:11:22.621    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:11:22.621    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@7 -- # VM_1_qemu_mask=6-7
00:11:22.621    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:11:22.621    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@10 -- # VM_2_qemu_mask=8-9
00:11:22.621    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:11:22.621   18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@14 -- # bdfs=($(get_nvme_bdfs))
00:11:22.621    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@14 -- # get_nvme_bdfs
00:11:22.621    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1498 -- # bdfs=()
00:11:22.621    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1498 -- # local bdfs
00:11:22.621    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:11:22.621     18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/gen_nvme.sh
00:11:22.621     18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:11:22.621    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:11:22.621    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0d:00.0
00:11:22.621    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@15 -- # get_vhost_dir 0
00:11:22.621    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:11:22.621    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:11:22.621    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:11:22.621   18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@15 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:11:22.621   18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@17 -- # virtio_type=virtio_scsi
00:11:22.621   18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@18 -- # [[ virtio_scsi != virtio_blk ]]
00:11:22.622   18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@18 -- # [[ virtio_scsi != virtio_scsi ]]
00:11:22.622   18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@31 -- # vhosttestinit
00:11:22.622   18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@37 -- # '[' '' == iso ']'
00:11:22.622   18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@41 -- # [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz ]]
00:11:22.622   18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@41 -- # [[ ! -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:11:22.622   18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@46 -- # [[ ! -f /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:11:22.622   18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@33 -- # vfu_tgt_run 0
00:11:22.622   18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@6 -- # local vhost_name=0
00:11:22.622   18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@7 -- # local vfio_user_dir vfu_pid_file rpc_py
00:11:22.622    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@9 -- # get_vhost_dir 0
00:11:22.622    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:11:22.622    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:11:22.622    18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:11:22.622   18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@9 -- # vfio_user_dir=/root/vhost_test/vhost/0
00:11:22.622   18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@10 -- # vfu_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:11:22.622   18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@11 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:11:22.622   18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@13 -- # mkdir -p /root/vhost_test/vhost/0
00:11:22.622   18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@15 -- # timing_enter vfu_tgt_start
00:11:22.622   18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:22.622   18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:11:22.622   18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@17 -- # vfupid=435238
00:11:22.622   18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@16 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -r /root/vhost_test/vhost/0/rpc.sock -m 0xf -s 512
00:11:22.622   18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@18 -- # echo 435238
00:11:22.622   18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@20 -- # echo 'Process pid: 435238'
00:11:22.622  Process pid: 435238
00:11:22.622   18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@21 -- # echo 'waiting for app to run...'
00:11:22.622  waiting for app to run...
00:11:22.622   18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@22 -- # waitforlisten 435238 /root/vhost_test/vhost/0/rpc.sock
00:11:22.622   18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@835 -- # '[' -z 435238 ']'
00:11:22.622   18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@839 -- # local rpc_addr=/root/vhost_test/vhost/0/rpc.sock
00:11:22.622   18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:22.622   18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...'
00:11:22.622  Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...
00:11:22.622   18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:22.622   18:32:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:11:22.886  [2024-11-17 18:32:09.241421] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:11:22.886  [2024-11-17 18:32:09.241550] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xf -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid435238 ]
00:11:22.886  EAL: No free 2048 kB hugepages reported on node 1
00:11:23.145  [2024-11-17 18:32:09.486721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:23.145  [2024-11-17 18:32:09.517374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:23.145  [2024-11-17 18:32:09.517454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:11:23.145  [2024-11-17 18:32:09.517459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:23.145  [2024-11-17 18:32:09.517509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:11:23.713   18:32:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:23.713   18:32:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@868 -- # return 0
00:11:23.713   18:32:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@24 -- # timing_exit vfu_tgt_start
00:11:23.713   18:32:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:23.713   18:32:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:11:23.713   18:32:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@35 -- # vfu_vm_dir=/root/vhost_test/vms/vfu_tgt
00:11:23.713   18:32:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@36 -- # rm -rf /root/vhost_test/vms/vfu_tgt
00:11:23.713   18:32:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@37 -- # mkdir -p /root/vhost_test/vms/vfu_tgt
00:11:23.713   18:32:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@39 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_nvme_attach_controller -b Nvme0 -t pcie -a 0000:0d:00.0
00:11:27.256  Nvme0n1
00:11:27.256   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@42 -- # disk_no=1
00:11:27.256   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@43 -- # vm_num=1
00:11:27.256   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@44 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vfu_tgt_set_base_path /root/vhost_test/vms/vfu_tgt
00:11:27.256   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@46 -- # [[ virtio_scsi == \v\i\r\t\i\o\_\b\l\k ]]
00:11:27.256   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@48 -- # [[ virtio_scsi == \v\i\r\t\i\o\_\s\c\s\i ]]
00:11:27.256   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@49 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vfu_virtio_create_scsi_endpoint virtio.1 --num-io-queues=2 --qsize=512 --packed-ring
00:11:27.256   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@50 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vfu_virtio_scsi_add_target virtio.1 --scsi-target-num=0 --bdev-name Nvme0n1
00:11:27.256  [2024-11-17 18:32:13.821403] vfu_virtio_scsi.c: 886:vfu_virtio_scsi_add_target: *NOTICE*: virtio.1: added SCSI target 0 using bdev 'Nvme0n1'
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@53 -- # vm_setup --disk-type=vfio_user_virtio --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=1
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@518 -- # xtrace_disable
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:11:27.517  WARN: removing existing VM in '/root/vhost_test/vms/1'
00:11:27.517  INFO: Creating new VM in /root/vhost_test/vms/1
00:11:27.517  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:11:27.517  INFO: TASK MASK: 6-7
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@671 -- # local node_num=0
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@672 -- # local boot_disk_present=false
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:11:27.517  INFO: NUMA NODE: 0
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@677 -- # [[ -n '' ]]
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@686 -- # [[ -z '' ]]
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@701 -- # IFS=,
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@701 -- # read -r disk disk_type _
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@702 -- # [[ -z '' ]]
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@702 -- # disk_type=vfio_user_virtio
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@704 -- # case $disk_type in
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@766 -- # notice 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:11:27.517  INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@767 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/vfu_tgt/virtio.$disk")
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@768 -- # [[ 1 == '' ]]
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@780 -- # [[ -n '' ]]
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@785 -- # (( 0 ))
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh'
00:11:27.517   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh'
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh'
00:11:27.518  INFO: Saving to /root/vhost_test/vms/1/run.sh
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@787 -- # cat
00:11:27.518    18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/vfu_tgt/virtio.1
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/1/run.sh
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@827 -- # echo 10100
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@828 -- # echo 10101
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@829 -- # echo 10102
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/1/migration_port
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@832 -- # [[ -z '' ]]
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@834 -- # echo 10104
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@835 -- # echo 101
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@837 -- # [[ -z '' ]]
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@838 -- # [[ -z '' ]]
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@54 -- # vm_run 1
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@843 -- # local run_all=false
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@844 -- # local vms_to_run=
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@846 -- # getopts a-: optchar
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@856 -- # false
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@859 -- # shift 0
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@860 -- # for vm in "$@"
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@861 -- # vm_num_is_valid 1
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]]
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@866 -- # vms_to_run+=' 1'
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@871 -- # vm_is_running 1
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@373 -- # return 1
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/1/run.sh'
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh'
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh'
00:11:27.518  INFO: running /root/vhost_test/vms/1/run.sh
00:11:27.518   18:32:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@877 -- # /root/vhost_test/vms/1/run.sh
00:11:27.518  Running VM in /root/vhost_test/vms/1
00:11:27.777  [2024-11-17 18:32:14.282757] tgt_endpoint.c: 167:tgt_accept_poller: *NOTICE*: /root/vhost_test/vms/vfu_tgt/virtio.1: attached successfully
00:11:28.035  Waiting for QEMU pid file
00:11:28.969  === qemu.log ===
00:11:28.969  === qemu.log ===
00:11:28.969   18:32:15 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@55 -- # vm_wait_for_boot 60 1
00:11:28.969   18:32:15 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@913 -- # assert_number 60
00:11:28.969   18:32:15 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@281 -- # [[ 60 =~ [0-9]+ ]]
00:11:28.969   18:32:15 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@281 -- # return 0
00:11:28.969   18:32:15 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@915 -- # xtrace_disable
00:11:28.969   18:32:15 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:11:28.969  INFO: Waiting for VMs to boot
00:11:28.969  INFO: waiting for VM1 (/root/vhost_test/vms/1)
00:11:43.849  [2024-11-17 18:32:29.019017] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:05.781  
00:12:05.781  INFO: VM1 ready
00:12:05.781  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:12:05.781  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:12:05.781  INFO: all VMs ready
00:12:05.781   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@973 -- # return 0
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@58 -- # fio_bin=--fio-bin=/usr/src/fio-static/fio
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@59 -- # fio_disks=
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@60 -- # qemu_mask_param=VM_1_qemu_mask
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@62 -- # host_name=VM-1-6-7
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@63 -- # vm_exec 1 'hostname VM-1-6-7'
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:12:05.782    18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:12:05.782    18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:12:05.782    18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:05.782    18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:05.782    18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:12:05.782    18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'hostname VM-1-6-7'
00:12:05.782  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@64 -- # vm_start_fio_server --fio-bin=/usr/src/fio-static/fio 1
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@977 -- # local OPTIND optchar
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@978 -- # local readonly=
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@979 -- # local fio_bin=
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@980 -- # getopts :-: optchar
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@981 -- # case "$optchar" in
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@983 -- # case "$OPTARG" in
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@984 -- # local fio_bin=/usr/src/fio-static/fio
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@980 -- # getopts :-: optchar
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@993 -- # shift 1
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@994 -- # for vm_num in "$@"
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@995 -- # notice 'Starting fio server on VM1'
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Starting fio server on VM1'
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Starting fio server on VM1'
00:12:05.782  INFO: Starting fio server on VM1
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@996 -- # [[ /usr/src/fio-static/fio != '' ]]
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@997 -- # vm_exec 1 'cat > /root/fio; chmod +x /root/fio'
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:12:05.782    18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:12:05.782    18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:12:05.782    18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:05.782    18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:05.782    18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:12:05.782    18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/fio; chmod +x /root/fio'
00:12:05.782  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@998 -- # vm_exec 1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:12:05.782    18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:12:05.782    18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:12:05.782    18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:05.782    18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:05.782    18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:12:05.782    18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:12:05.782   18:32:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:12:05.782  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:12:05.782   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@66 -- # disks_before_restart=
00:12:05.782   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@67 -- # get_disks virtio_scsi 1
00:12:05.782   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@24 -- # [[ virtio_scsi == \v\i\r\t\i\o\_\s\c\s\i ]]
00:12:05.782   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@25 -- # vm_check_scsi_location 1
00:12:05.782   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1014 -- # local 'script=shopt -s nullglob;
00:12:05.782  	for entry in /sys/block/sd*; do
00:12:05.782  		disk_type="$(cat $entry/device/vendor)";
00:12:05.782  		if [[ $disk_type == INTEL* ]] || [[ $disk_type == RAWSCSI* ]] || [[ $disk_type == LIO-ORG* ]]; then
00:12:05.782  			fname=$(basename $entry);
00:12:05.782  			echo -n " $fname";
00:12:05.782  		fi;
00:12:05.782  	done'
00:12:05.782    18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1016 -- # echo 'shopt -s nullglob;
00:12:05.782  	for entry in /sys/block/sd*; do
00:12:05.782  		disk_type="$(cat $entry/device/vendor)";
00:12:05.782  		if [[ $disk_type == INTEL* ]] || [[ $disk_type == RAWSCSI* ]] || [[ $disk_type == LIO-ORG* ]]; then
00:12:05.782  			fname=$(basename $entry);
00:12:05.782  			echo -n " $fname";
00:12:05.782  		fi;
00:12:05.782  	done'
00:12:05.782    18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1016 -- # vm_exec 1 bash -s
00:12:05.782    18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:12:05.782    18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:05.782    18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:05.782    18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:12:05.782    18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:12:05.782     18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:12:05.782     18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:12:05.782     18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:05.782     18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:05.782     18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:12:05.782     18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:12:05.782    18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 bash -s
00:12:05.782  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:12:05.782   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1016 -- # SCSI_DISK=' sdb'
00:12:05.782   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1018 -- # [[ -z  sdb ]]
00:12:05.782   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@68 -- # disks_before_restart=' sdb'
00:12:05.782    18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@70 -- # printf :/dev/%s sdb
00:12:05.782   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@70 -- # fio_disks=' --vm=1:/dev/sdb'
00:12:05.782   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@71 -- # job_file=default_integrity.job
00:12:05.782   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@74 -- # run_fio --fio-bin=/usr/src/fio-static/fio --job-file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job --out=/root/vhost_test/fio_results --vm=1:/dev/sdb
00:12:05.782   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1053 -- # local arg
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1054 -- # local job_file=
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1055 -- # local fio_bin=
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1056 -- # vms=()
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1056 -- # local vms
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1057 -- # local out=
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1058 -- # local vm
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1059 -- # local run_server_mode=true
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1060 -- # local run_plugin_mode=false
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1061 -- # local fio_start_cmd
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1062 -- # local fio_output_format=normal
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1063 -- # local fio_gtod_reduce=false
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1064 -- # local wait_for_fio=true
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1066 -- # for arg in "$@"
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1067 -- # case "$arg" in
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1069 -- # local fio_bin=/usr/src/fio-static/fio
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1066 -- # for arg in "$@"
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1067 -- # case "$arg" in
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1068 -- # local job_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1066 -- # for arg in "$@"
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1067 -- # case "$arg" in
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1072 -- # local out=/root/vhost_test/fio_results
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1073 -- # mkdir -p /root/vhost_test/fio_results
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1066 -- # for arg in "$@"
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1067 -- # case "$arg" in
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1070 -- # vms+=("${arg#*=}")
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1092 -- # [[ -n /usr/src/fio-static/fio ]]
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1092 -- # [[ ! -r /usr/src/fio-static/fio ]]
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1097 -- # [[ -z /usr/src/fio-static/fio ]]
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1101 -- # [[ ! -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job ]]
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1106 -- # fio_start_cmd='/usr/src/fio-static/fio --eta=never '
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1108 -- # local job_fname
00:12:05.783    18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1109 -- # basename /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1109 -- # job_fname=default_integrity.job
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1110 -- # log_fname=default_integrity.log
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1111 -- # fio_start_cmd+=' --output=/root/vhost_test/fio_results/default_integrity.log --output-format=normal '
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1114 -- # for vm in "${vms[@]}"
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1115 -- # local vm_num=1
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1116 -- # local vmdisks=/dev/sdb
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1118 -- # sed 's@filename=@filename=/dev/sdb@;s@description=\(.*\)@description=\1 (VM=1)@' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1119 -- # vm_exec 1 'cat > /root/default_integrity.job'
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:12:05.783    18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:12:05.783    18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:12:05.783    18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:05.783    18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:05.783    18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:12:05.783    18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:12:05.783   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/default_integrity.job'
00:12:05.783  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:12:06.042   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1121 -- # false
00:12:06.042   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1125 -- # vm_exec 1 cat /root/default_integrity.job
00:12:06.042   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:12:06.042   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:06.042   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:06.042   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:12:06.042   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:12:06.042    18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:12:06.042    18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:12:06.042    18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:06.042    18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:06.042    18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:12:06.042    18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:12:06.042   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 cat /root/default_integrity.job
00:12:06.042  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:12:06.300  [global]
00:12:06.300  blocksize_range=4k-512k
00:12:06.300  iodepth=512
00:12:06.300  iodepth_batch=128
00:12:06.300  iodepth_low=256
00:12:06.300  ioengine=libaio
00:12:06.300  size=1G
00:12:06.300  io_size=4G
00:12:06.300  filename=/dev/sdb
00:12:06.300  group_reporting
00:12:06.300  thread
00:12:06.300  numjobs=1
00:12:06.300  direct=1
00:12:06.300  rw=randwrite
00:12:06.300  do_verify=1
00:12:06.300  verify=md5
00:12:06.300  verify_backlog=1024
00:12:06.300  fsync_on_close=1
00:12:06.300  verify_state_save=0
00:12:06.300  [nvme-host]
00:12:06.300   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1127 -- # true
00:12:06.300    18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1128 -- # vm_fio_socket 1
00:12:06.300    18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@326 -- # vm_num_is_valid 1
00:12:06.300    18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:06.301    18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:06.301    18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@327 -- # local vm_dir=/root/vhost_test/vms/1
00:12:06.301    18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@329 -- # cat /root/vhost_test/vms/1/fio_socket
00:12:06.301   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1128 -- # fio_start_cmd+='--client=127.0.0.1,10101 --remote-config /root/default_integrity.job '
00:12:06.301   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1131 -- # true
00:12:06.301   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1147 -- # true
00:12:06.301   18:32:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1161 -- # /usr/src/fio-static/fio --eta=never --output=/root/vhost_test/fio_results/default_integrity.log --output-format=normal --client=127.0.0.1,10101 --remote-config /root/default_integrity.job
00:12:07.235  [2024-11-17 18:32:53.777973] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:12.511  [2024-11-17 18:32:58.311015] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:12.511  [2024-11-17 18:32:58.622874] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:16.706  [2024-11-17 18:33:02.949261] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:16.706  [2024-11-17 18:33:03.217394] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:16.706   18:33:03 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1162 -- # sleep 1
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1164 -- # [[ normal == \j\s\o\n ]]
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1172 -- # [[ ! -n '' ]]
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1173 -- # cat /root/vhost_test/fio_results/default_integrity.log
00:12:18.085  hostname=VM-1-6-7, be=0, 64-bit, os=Linux, arch=x86-64, fio=fio-3.35, flags=1
00:12:18.085  <VM-1-6-7> nvme-host: (g=0): rw=randwrite, bs=(R) 4096B-512KiB, (W) 4096B-512KiB, (T) 4096B-512KiB, ioengine=libaio, iodepth=512
00:12:18.085  <VM-1-6-7> Starting 1 thread
00:12:18.085  <VM-1-6-7> 
00:12:18.085  nvme-host: (groupid=0, jobs=1): err= 0: pid=966: Sun Nov 17 18:33:03 2024
00:12:18.085    read: IOPS=1333, BW=224MiB/s (235MB/s)(2048MiB/9155msec)
00:12:18.085      slat (usec): min=45, max=35205, avg=3071.29, stdev=5396.54
00:12:18.085      clat (msec): min=6, max=341, avg=134.04, stdev=71.22
00:12:18.085       lat (msec): min=7, max=343, avg=137.11, stdev=70.95
00:12:18.085      clat percentiles (msec):
00:12:18.085       |  1.00th=[   12],  5.00th=[   21], 10.00th=[   46], 20.00th=[   75],
00:12:18.085       | 30.00th=[   91], 40.00th=[  110], 50.00th=[  127], 60.00th=[  144],
00:12:18.085       | 70.00th=[  167], 80.00th=[  197], 90.00th=[  234], 95.00th=[  266],
00:12:18.085       | 99.00th=[  317], 99.50th=[  326], 99.90th=[  334], 99.95th=[  338],
00:12:18.085       | 99.99th=[  342]
00:12:18.085    write: IOPS=1426, BW=239MiB/s (251MB/s)(2048MiB/8556msec); 0 zone resets
00:12:18.085      slat (usec): min=346, max=73998, avg=21129.15, stdev=14316.24
00:12:18.085      clat (msec): min=7, max=294, avg=117.69, stdev=64.09
00:12:18.085       lat (msec): min=8, max=325, avg=138.82, stdev=66.80
00:12:18.085      clat percentiles (msec):
00:12:18.085       |  1.00th=[    8],  5.00th=[   18], 10.00th=[   30], 20.00th=[   66],
00:12:18.085       | 30.00th=[   79], 40.00th=[   95], 50.00th=[  112], 60.00th=[  127],
00:12:18.085       | 70.00th=[  150], 80.00th=[  171], 90.00th=[  209], 95.00th=[  239],
00:12:18.085       | 99.00th=[  284], 99.50th=[  296], 99.90th=[  296], 99.95th=[  296],
00:12:18.085       | 99.99th=[  296]
00:12:18.085     bw (  KiB/s): min=21032, max=386216, per=95.06%, avg=232990.11, stdev=95541.50, samples=18
00:12:18.085     iops        : min=  102, max= 2048, avg=1356.22, stdev=629.24, samples=18
00:12:18.085    lat (msec)   : 10=0.92%, 20=4.13%, 50=7.99%, 100=25.61%, 250=56.23%
00:12:18.085    lat (msec)   : 500=5.13%
00:12:18.085    cpu          : usr=93.39%, sys=2.17%, ctx=562, majf=0, minf=34
00:12:18.085    IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.5%, >=64=99.1%
00:12:18.085       submit    : 0=0.0%, 4=0.0%, 8=1.2%, 16=0.0%, 32=0.0%, 64=19.2%, >=64=79.6%
00:12:18.085       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:12:18.085       issued rwts: total=12208,12208,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:18.085       latency   : target=0, window=0, percentile=100.00%, depth=512
00:12:18.085  
00:12:18.085  Run status group 0 (all jobs):
00:12:18.085     READ: bw=224MiB/s (235MB/s), 224MiB/s-224MiB/s (235MB/s-235MB/s), io=2048MiB (2147MB), run=9155-9155msec
00:12:18.085    WRITE: bw=239MiB/s (251MB/s), 239MiB/s-239MiB/s (251MB/s-251MB/s), io=2048MiB (2147MB), run=8556-8556msec
00:12:18.085  
00:12:18.085  Disk stats (read/write):
00:12:18.085    sdb: ios=11987/12182, merge=63/87, ticks=139511/101626, in_queue=241138, util=30.19%
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@77 -- # notice 'Shutting down virtual machine...'
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine...'
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine...'
00:12:18.085  INFO: Shutting down virtual machine...
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@78 -- # vm_shutdown_all
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@487 -- # local timeo=90 vms vm
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@489 -- # vms=($(vm_list_all))
00:12:18.085    18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@489 -- # vm_list_all
00:12:18.085    18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@466 -- # vms=()
00:12:18.085    18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@466 -- # local vms
00:12:18.085    18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:12:18.085    18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:12:18.085    18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@492 -- # vm_shutdown 1
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@417 -- # vm_num_is_valid 1
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/1
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/1 ]]
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@424 -- # vm_is_running 1
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:12:18.085    18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # vm_pid=436127
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 436127
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@380 -- # return 0
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1'
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1'
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1'
00:12:18.085  INFO: Shutting down virtual machine /root/vhost_test/vms/1
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@432 -- # set +e
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@433 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\'''
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:12:18.085    18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:12:18.085    18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:12:18.085    18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:18.085    18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:18.085    18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:12:18.085    18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:12:18.085  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@434 -- # notice 'VM1 is shutting down - wait a while to complete'
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete'
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete'
00:12:18.085  INFO: VM1 is shutting down - wait a while to complete
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@435 -- # set -e
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@495 -- # notice 'Waiting for VMs to shutdown...'
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...'
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:18.085   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:12:18.086   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...'
00:12:18.086  INFO: Waiting for VMs to shutdown...
00:12:18.086   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:12:18.086   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:12:18.086   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:12:18.086   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:12:18.086   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:18.086   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:18.086   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:12:18.086   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:12:18.086   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:12:18.086    18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:12:18.086   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # vm_pid=436127
00:12:18.086   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 436127
00:12:18.086   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@380 -- # return 0
00:12:18.086   18:33:04 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:12:19.023   18:33:05 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:12:19.023   18:33:05 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:12:19.023   18:33:05 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:12:19.023   18:33:05 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:12:19.023   18:33:05 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:19.023   18:33:05 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:19.023   18:33:05 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:12:19.023   18:33:05 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:12:19.023   18:33:05 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:12:19.023    18:33:05 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:12:19.023   18:33:05 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # vm_pid=436127
00:12:19.023   18:33:05 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 436127
00:12:19.023   18:33:05 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@380 -- # return 0
00:12:19.023   18:33:05 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:12:20.401   18:33:06 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:12:20.401   18:33:06 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:12:20.401   18:33:06 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:12:20.401   18:33:06 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:12:20.401   18:33:06 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:20.401   18:33:06 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:20.401   18:33:06 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:12:20.401   18:33:06 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:12:20.401   18:33:06 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@373 -- # return 1
00:12:20.401   18:33:06 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:12:20.401   18:33:06 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 0 > 0 ))
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@503 -- # (( 0 == 0 ))
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@504 -- # notice 'All VMs successfully shut down'
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down'
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down'
00:12:21.339  INFO: All VMs successfully shut down
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@505 -- # return 0
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@81 -- # vm_setup --disk-type=vfio_user_virtio --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=1
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@518 -- # xtrace_disable
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:12:21.339  WARN: removing existing VM in '/root/vhost_test/vms/1'
00:12:21.339  INFO: Creating new VM in /root/vhost_test/vms/1
00:12:21.339  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:12:21.339  INFO: TASK MASK: 6-7
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@671 -- # local node_num=0
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@672 -- # local boot_disk_present=false
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:12:21.339  INFO: NUMA NODE: 0
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@677 -- # [[ -n '' ]]
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@686 -- # [[ -z '' ]]
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@701 -- # IFS=,
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@701 -- # read -r disk disk_type _
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@702 -- # [[ -z '' ]]
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@702 -- # disk_type=vfio_user_virtio
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@704 -- # case $disk_type in
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@766 -- # notice 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:12:21.339  INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@767 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/vfu_tgt/virtio.$disk")
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@768 -- # [[ 1 == '' ]]
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@780 -- # [[ -n '' ]]
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@785 -- # (( 0 ))
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh'
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh'
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh'
00:12:21.339  INFO: Saving to /root/vhost_test/vms/1/run.sh
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@787 -- # cat
00:12:21.339    18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/vfu_tgt/virtio.1
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/1/run.sh
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@827 -- # echo 10100
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@828 -- # echo 10101
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@829 -- # echo 10102
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/1/migration_port
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@832 -- # [[ -z '' ]]
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@834 -- # echo 10104
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@835 -- # echo 101
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@837 -- # [[ -z '' ]]
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@838 -- # [[ -z '' ]]
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@82 -- # vm_run 1
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@843 -- # local run_all=false
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@844 -- # local vms_to_run=
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@846 -- # getopts a-: optchar
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@856 -- # false
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@859 -- # shift 0
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@860 -- # for vm in "$@"
00:12:21.339   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@861 -- # vm_num_is_valid 1
00:12:21.340   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:21.340   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:21.340   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]]
00:12:21.340   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@866 -- # vms_to_run+=' 1'
00:12:21.340   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:12:21.340   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@871 -- # vm_is_running 1
00:12:21.340   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:12:21.340   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:21.340   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:21.340   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:12:21.340   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:12:21.340   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@373 -- # return 1
00:12:21.340   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/1/run.sh'
00:12:21.340   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh'
00:12:21.340   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:21.340   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:12:21.340   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:21.340   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:21.340   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:12:21.340   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh'
00:12:21.340  INFO: running /root/vhost_test/vms/1/run.sh
00:12:21.340   18:33:07 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@877 -- # /root/vhost_test/vms/1/run.sh
00:12:21.340  Running VM in /root/vhost_test/vms/1
00:12:21.340  [2024-11-17 18:33:07.855661] tgt_endpoint.c: 167:tgt_accept_poller: *NOTICE*: /root/vhost_test/vms/vfu_tgt/virtio.1: attached successfully
00:12:21.601  Waiting for QEMU pid file
00:12:22.538  === qemu.log ===
00:12:22.538  === qemu.log ===
00:12:22.538   18:33:08 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@83 -- # vm_wait_for_boot 60 1
00:12:22.538   18:33:08 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@913 -- # assert_number 60
00:12:22.538   18:33:08 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@281 -- # [[ 60 =~ [0-9]+ ]]
00:12:22.538   18:33:08 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@281 -- # return 0
00:12:22.538   18:33:08 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@915 -- # xtrace_disable
00:12:22.538   18:33:08 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:12:22.538  INFO: Waiting for VMs to boot
00:12:22.538  INFO: waiting for VM1 (/root/vhost_test/vms/1)
00:12:37.422  [2024-11-17 18:33:22.954251] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:59.375  
00:12:59.375  INFO: VM1 ready
00:12:59.375  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:12:59.375  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:12:59.375  INFO: all VMs ready
00:12:59.375   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@973 -- # return 0
00:12:59.375   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@86 -- # disks_after_restart=
00:12:59.375   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@87 -- # get_disks virtio_scsi 1
00:12:59.375   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@24 -- # [[ virtio_scsi == \v\i\r\t\i\o\_\s\c\s\i ]]
00:12:59.375   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@25 -- # vm_check_scsi_location 1
00:12:59.375   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1014 -- # local 'script=shopt -s nullglob;
00:12:59.375  	for entry in /sys/block/sd*; do
00:12:59.375  		disk_type="$(cat $entry/device/vendor)";
00:12:59.375  		if [[ $disk_type == INTEL* ]] || [[ $disk_type == RAWSCSI* ]] || [[ $disk_type == LIO-ORG* ]]; then
00:12:59.375  			fname=$(basename $entry);
00:12:59.375  			echo -n " $fname";
00:12:59.375  		fi;
00:12:59.375  	done'
00:12:59.375    18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1016 -- # echo 'shopt -s nullglob;
00:12:59.375  	for entry in /sys/block/sd*; do
00:12:59.375  		disk_type="$(cat $entry/device/vendor)";
00:12:59.375  		if [[ $disk_type == INTEL* ]] || [[ $disk_type == RAWSCSI* ]] || [[ $disk_type == LIO-ORG* ]]; then
00:12:59.375  			fname=$(basename $entry);
00:12:59.375  			echo -n " $fname";
00:12:59.375  		fi;
00:12:59.375  	done'
00:12:59.375    18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1016 -- # vm_exec 1 bash -s
00:12:59.375    18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:12:59.375    18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:59.375    18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:59.375    18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:12:59.375    18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:12:59.375     18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:12:59.375     18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:12:59.375     18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:59.375     18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:59.375     18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:12:59.375     18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:12:59.375    18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 bash -s
00:12:59.375  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:12:59.375   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1016 -- # SCSI_DISK=' sdb'
00:12:59.375   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1018 -- # [[ -z  sdb ]]
00:12:59.375   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@88 -- # disks_after_restart=' sdb'
00:12:59.375   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@90 -- # [[  sdb != \ \s\d\b ]]
00:12:59.375   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@96 -- # notice 'Shutting down virtual machine...'
00:12:59.375   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine...'
00:12:59.375   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:59.375   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:12:59.375   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:59.375   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:59.375   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:12:59.375   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine...'
00:12:59.375  INFO: Shutting down virtual machine...
00:12:59.375   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@97 -- # vm_shutdown_all
00:12:59.375   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@487 -- # local timeo=90 vms vm
00:12:59.375   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@489 -- # vms=($(vm_list_all))
00:12:59.375    18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@489 -- # vm_list_all
00:12:59.375    18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@466 -- # vms=()
00:12:59.375    18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@466 -- # local vms
00:12:59.375    18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:12:59.375    18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:12:59.376    18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:12:59.376   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:12:59.376   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@492 -- # vm_shutdown 1
00:12:59.376   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@417 -- # vm_num_is_valid 1
00:12:59.376   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:59.376   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:59.376   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/1
00:12:59.376   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/1 ]]
00:12:59.376   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@424 -- # vm_is_running 1
00:12:59.376   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:12:59.376   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:59.376   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:59.376   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:12:59.376   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:12:59.376   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:12:59.376    18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:12:59.376   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # vm_pid=446122
00:12:59.376   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 446122
00:12:59.376   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@380 -- # return 0
00:12:59.376   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1'
00:12:59.376   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1'
00:12:59.376   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:59.376   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:12:59.376   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:59.376   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:59.376   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:12:59.376   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1'
00:12:59.376  INFO: Shutting down virtual machine /root/vhost_test/vms/1
00:12:59.376   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@432 -- # set +e
00:12:59.376   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@433 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\'''
00:12:59.376   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:12:59.376   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:59.376   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:59.376   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:12:59.376   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:12:59.376    18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:12:59.376    18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:12:59.376    18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:59.376    18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:59.376    18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:12:59.376    18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:12:59.376   18:33:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:12:59.635  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:12:59.635   18:33:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@434 -- # notice 'VM1 is shutting down - wait a while to complete'
00:12:59.635   18:33:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete'
00:12:59.635   18:33:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:59.635   18:33:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:12:59.635   18:33:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:59.635   18:33:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:59.635   18:33:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:12:59.635   18:33:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete'
00:12:59.635  INFO: VM1 is shutting down - wait a while to complete
00:12:59.635   18:33:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@435 -- # set -e
00:12:59.635   18:33:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@495 -- # notice 'Waiting for VMs to shutdown...'
00:12:59.635   18:33:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...'
00:12:59.635   18:33:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:59.635   18:33:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:12:59.635   18:33:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:59.635   18:33:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:59.635   18:33:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:12:59.635   18:33:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...'
00:12:59.635  INFO: Waiting for VMs to shutdown...
00:12:59.635   18:33:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:12:59.635   18:33:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:12:59.635   18:33:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:12:59.635   18:33:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:12:59.636   18:33:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:59.636   18:33:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:59.636   18:33:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:12:59.636   18:33:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:12:59.636   18:33:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:12:59.636    18:33:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:12:59.636   18:33:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # vm_pid=446122
00:12:59.636   18:33:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 446122
00:12:59.636   18:33:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@380 -- # return 0
00:12:59.636   18:33:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:13:01.012   18:33:47 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:13:01.012   18:33:47 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:13:01.012   18:33:47 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:13:01.013   18:33:47 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:13:01.013   18:33:47 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:01.013   18:33:47 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:13:01.013   18:33:47 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:13:01.013   18:33:47 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:13:01.013   18:33:47 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:13:01.013    18:33:47 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:13:01.013   18:33:47 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # vm_pid=446122
00:13:01.013   18:33:47 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 446122
00:13:01.013   18:33:47 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@380 -- # return 0
00:13:01.013   18:33:47 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:13:01.949   18:33:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:13:01.950   18:33:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:13:01.950   18:33:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:13:01.950   18:33:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:13:01.950   18:33:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:01.950   18:33:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:13:01.950   18:33:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:13:01.950   18:33:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:13:01.950   18:33:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@373 -- # return 1
00:13:01.950   18:33:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:13:01.950   18:33:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:13:02.885   18:33:49 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 0 > 0 ))
00:13:02.885   18:33:49 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@503 -- # (( 0 == 0 ))
00:13:02.885   18:33:49 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@504 -- # notice 'All VMs successfully shut down'
00:13:02.885   18:33:49 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down'
00:13:02.885   18:33:49 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:13:02.885   18:33:49 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:13:02.885   18:33:49 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:13:02.885   18:33:49 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:13:02.885   18:33:49 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:13:02.885   18:33:49 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down'
00:13:02.885  INFO: All VMs successfully shut down
00:13:02.885   18:33:49 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@505 -- # return 0
00:13:02.885   18:33:49 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@99 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_nvme_detach_controller Nvme0
00:13:02.885  [2024-11-17 18:33:49.411661] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (Nvme0n1) received event(SPDK_BDEV_EVENT_REMOVE)
00:13:04.264   18:33:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@101 -- # vhost_kill 0
00:13:04.264   18:33:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@202 -- # local rc=0
00:13:04.264   18:33:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@203 -- # local vhost_name=0
00:13:04.264   18:33:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@205 -- # [[ -z 0 ]]
00:13:04.264   18:33:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@210 -- # local vhost_dir
00:13:04.264    18:33:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@211 -- # get_vhost_dir 0
00:13:04.264    18:33:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:13:04.264    18:33:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:13:04.264    18:33:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:13:04.264   18:33:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@211 -- # vhost_dir=/root/vhost_test/vhost/0
00:13:04.264   18:33:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@212 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:13:04.264   18:33:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@214 -- # [[ ! -r /root/vhost_test/vhost/0/vhost.pid ]]
00:13:04.264   18:33:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@219 -- # timing_enter vhost_kill
00:13:04.264   18:33:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@726 -- # xtrace_disable
00:13:04.264   18:33:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:13:04.264   18:33:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@220 -- # local vhost_pid
00:13:04.264    18:33:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@221 -- # cat /root/vhost_test/vhost/0/vhost.pid
00:13:04.264   18:33:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@221 -- # vhost_pid=435238
00:13:04.264   18:33:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@222 -- # notice 'killing vhost (PID 435238) app'
00:13:04.264   18:33:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'killing vhost (PID 435238) app'
00:13:04.264   18:33:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:13:04.264   18:33:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:13:04.264   18:33:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:13:04.264   18:33:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:13:04.264   18:33:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:13:04.264   18:33:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: killing vhost (PID 435238) app'
00:13:04.264  INFO: killing vhost (PID 435238) app
00:13:04.264   18:33:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@224 -- # kill -INT 435238
00:13:04.264   18:33:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@225 -- # notice 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:13:04.264   18:33:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:13:04.264   18:33:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:13:04.264   18:33:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:13:04.264   18:33:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:13:04.264   18:33:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:13:04.264   18:33:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:13:04.264   18:33:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: sent SIGINT to vhost app - waiting 60 seconds to exit'
00:13:04.264  INFO: sent SIGINT to vhost app - waiting 60 seconds to exit
00:13:04.264   18:33:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@226 -- # (( i = 0 ))
00:13:04.264   18:33:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@226 -- # (( i < 60 ))
00:13:04.264   18:33:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@227 -- # kill -0 435238
00:13:04.264   18:33:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@228 -- # echo .
00:13:04.264  .
00:13:04.264   18:33:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@229 -- # sleep 1
00:13:05.643   18:33:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@226 -- # (( i++ ))
00:13:05.643   18:33:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@226 -- # (( i < 60 ))
00:13:05.643   18:33:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@227 -- # kill -0 435238
00:13:05.643  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 227: kill: (435238) - No such process
00:13:05.643   18:33:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@231 -- # break
00:13:05.643   18:33:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@234 -- # kill -0 435238
00:13:05.643  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 234: kill: (435238) - No such process
00:13:05.643   18:33:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@239 -- # kill -0 435238
00:13:05.643  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 239: kill: (435238) - No such process
00:13:05.643   18:33:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@245 -- # is_pid_child 435238
00:13:05.643   18:33:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1668 -- # local pid=435238 _pid
00:13:05.643   18:33:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1670 -- # read -r _pid
00:13:05.643    18:33:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1667 -- # jobs -pr
00:13:05.643   18:33:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1671 -- # (( pid == _pid ))
00:13:05.643   18:33:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1670 -- # read -r _pid
00:13:05.643   18:33:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1674 -- # return 1
00:13:05.643   18:33:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@257 -- # timing_exit vhost_kill
00:13:05.643   18:33:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@732 -- # xtrace_disable
00:13:05.643   18:33:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:13:05.643   18:33:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@259 -- # rm -rf /root/vhost_test/vhost/0
00:13:05.643   18:33:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@261 -- # return 0
00:13:05.643   18:33:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@103 -- # vhosttestfini
00:13:05.643   18:33:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@54 -- # '[' '' == iso ']'
00:13:05.643  
00:13:05.643  real	1m42.949s
00:13:05.643  user	6m46.915s
00:13:05.643  sys	0m1.712s
00:13:05.643   18:33:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:05.643   18:33:51 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:13:05.643  ************************************
00:13:05.643  END TEST vfio_user_virtio_scsi_restart_vm
00:13:05.643  ************************************
00:13:05.643   18:33:51 vfio_user_qemu -- vfio_user/vfio_user.sh@19 -- # run_test vfio_user_virtio_bdevperf /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/initiator_bdevperf.sh
00:13:05.643   18:33:51 vfio_user_qemu -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:05.643   18:33:51 vfio_user_qemu -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:05.643   18:33:51 vfio_user_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:05.643  ************************************
00:13:05.643  START TEST vfio_user_virtio_bdevperf
00:13:05.643  ************************************
00:13:05.643   18:33:51 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/initiator_bdevperf.sh
00:13:05.643  * Looking for test storage...
00:13:05.644  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:13:05.644    18:33:51 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:13:05.644     18:33:51 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version
00:13:05.644     18:33:51 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:13:05.644    18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:13:05.644    18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:05.644    18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:05.644    18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:05.644    18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@336 -- # IFS=.-:
00:13:05.644    18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@336 -- # read -ra ver1
00:13:05.644    18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@337 -- # IFS=.-:
00:13:05.644    18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@337 -- # read -ra ver2
00:13:05.644    18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@338 -- # local 'op=<'
00:13:05.644    18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@340 -- # ver1_l=2
00:13:05.644    18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@341 -- # ver2_l=1
00:13:05.644    18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:13:05.644    18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@344 -- # case "$op" in
00:13:05.644    18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@345 -- # : 1
00:13:05.644    18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@364 -- # (( v = 0 ))
00:13:05.644    18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:05.644     18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@365 -- # decimal 1
00:13:05.644     18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@353 -- # local d=1
00:13:05.644     18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:05.644     18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@355 -- # echo 1
00:13:05.644    18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1
00:13:05.644     18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@366 -- # decimal 2
00:13:05.644     18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@353 -- # local d=2
00:13:05.644     18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:05.644     18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@355 -- # echo 2
00:13:05.644    18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2
00:13:05.644    18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:13:05.644    18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:13:05.644    18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@368 -- # return 0
00:13:05.644    18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:05.644    18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:13:05.644  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:05.644  		--rc genhtml_branch_coverage=1
00:13:05.644  		--rc genhtml_function_coverage=1
00:13:05.644  		--rc genhtml_legend=1
00:13:05.644  		--rc geninfo_all_blocks=1
00:13:05.644  		--rc geninfo_unexecuted_blocks=1
00:13:05.644  		
00:13:05.644  		'
00:13:05.644    18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:13:05.644  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:05.644  		--rc genhtml_branch_coverage=1
00:13:05.644  		--rc genhtml_function_coverage=1
00:13:05.644  		--rc genhtml_legend=1
00:13:05.644  		--rc geninfo_all_blocks=1
00:13:05.644  		--rc geninfo_unexecuted_blocks=1
00:13:05.644  		
00:13:05.644  		'
00:13:05.644    18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:13:05.644  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:05.644  		--rc genhtml_branch_coverage=1
00:13:05.644  		--rc genhtml_function_coverage=1
00:13:05.644  		--rc genhtml_legend=1
00:13:05.644  		--rc geninfo_all_blocks=1
00:13:05.644  		--rc geninfo_unexecuted_blocks=1
00:13:05.644  		
00:13:05.644  		'
00:13:05.644    18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:13:05.644  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:05.644  		--rc genhtml_branch_coverage=1
00:13:05.644  		--rc genhtml_function_coverage=1
00:13:05.644  		--rc genhtml_legend=1
00:13:05.644  		--rc geninfo_all_blocks=1
00:13:05.644  		--rc geninfo_unexecuted_blocks=1
00:13:05.644  		
00:13:05.644  		'
00:13:05.644   18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@9 -- # rpc_py=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:13:05.644   18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@11 -- # vfu_dir=/tmp/vfu_devices
00:13:05.644   18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@12 -- # rm -rf /tmp/vfu_devices
00:13:05.644   18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@13 -- # mkdir -p /tmp/vfu_devices
00:13:05.644   18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@16 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0xf -L vfu_virtio
00:13:05.644   18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@17 -- # spdk_tgt_pid=453936
00:13:05.644   18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@18 -- # waitforlisten 453936
00:13:05.644   18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 453936 ']'
00:13:05.644   18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:05.644   18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:05.644   18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:05.644  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:05.644   18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:05.644   18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:13:05.644  [2024-11-17 18:33:52.144639] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:13:05.644  [2024-11-17 18:33:52.144770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid453936 ]
00:13:05.644  EAL: No free 2048 kB hugepages reported on node 1
00:13:05.903  [2024-11-17 18:33:52.250272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:13:05.903  [2024-11-17 18:33:52.292267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:13:05.903  [2024-11-17 18:33:52.292341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:13:05.903  [2024-11-17 18:33:52.292350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:05.903  [2024-11-17 18:33:52.292396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:13:06.470   18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:06.470   18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:13:06.470   18:33:52 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create -b malloc0 64 512
00:13:06.729  malloc0
00:13:06.729   18:33:53 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create -b malloc1 64 512
00:13:06.989  malloc1
00:13:06.989   18:33:53 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@22 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create -b malloc2 64 512
00:13:07.248  malloc2
00:13:07.248   18:33:53 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@24 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py vfu_tgt_set_base_path /tmp/vfu_devices
00:13:07.507   18:33:53 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@27 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py vfu_virtio_create_blk_endpoint vfu.blk --bdev-name malloc0 --cpumask=0x1 --num-queues=2 --qsize=256 --packed-ring
00:13:07.766  [2024-11-17 18:33:54.143141] vfu_virtio.c:1533:vfu_virtio_endpoint_setup: *DEBUG*: mmap file /tmp/vfu_devices/vfu.blk_bar4, devmem_fd 466
00:13:07.766  [2024-11-17 18:33:54.143182] vfu_virtio.c:1695:vfu_virtio_get_device_info: *DEBUG*: /tmp/vfu_devices/vfu.blk: get device information, fd 466
00:13:07.766  [2024-11-17 18:33:54.143322] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.blk: get vendor capability, idx 0
00:13:07.766  [2024-11-17 18:33:54.143357] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.blk: get vendor capability, idx 1
00:13:07.766  [2024-11-17 18:33:54.143371] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.blk: get vendor capability, idx 2
00:13:07.766  [2024-11-17 18:33:54.143381] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.blk: get vendor capability, idx 3
00:13:07.766   18:33:54 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py vfu_virtio_create_scsi_endpoint vfu.scsi --cpumask 0x2 --num-io-queues=2 --qsize=256 --packed-ring
00:13:08.025  [2024-11-17 18:33:54.351997] vfu_virtio.c:1533:vfu_virtio_endpoint_setup: *DEBUG*: mmap file /tmp/vfu_devices/vfu.scsi_bar4, devmem_fd 567
00:13:08.025  [2024-11-17 18:33:54.352029] vfu_virtio.c:1695:vfu_virtio_get_device_info: *DEBUG*: /tmp/vfu_devices/vfu.scsi: get device information, fd 567
00:13:08.025  [2024-11-17 18:33:54.352088] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.scsi: get vendor capability, idx 0
00:13:08.025  [2024-11-17 18:33:54.352104] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.scsi: get vendor capability, idx 1
00:13:08.025  [2024-11-17 18:33:54.352112] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.scsi: get vendor capability, idx 2
00:13:08.025  [2024-11-17 18:33:54.352121] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.scsi: get vendor capability, idx 3
00:13:08.025   18:33:54 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@33 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py vfu_virtio_scsi_add_target vfu.scsi --scsi-target-num=0 --bdev-name malloc1
00:13:08.025  [2024-11-17 18:33:54.548711] vfu_virtio_scsi.c: 886:vfu_virtio_scsi_add_target: *NOTICE*: vfu.scsi: added SCSI target 0 using bdev 'malloc1'
00:13:08.025   18:33:54 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py vfu_virtio_scsi_add_target vfu.scsi --scsi-target-num=1 --bdev-name malloc2
00:13:08.283  [2024-11-17 18:33:54.781658] vfu_virtio_scsi.c: 886:vfu_virtio_scsi_add_target: *NOTICE*: vfu.scsi: added SCSI target 1 using bdev 'malloc2'
00:13:08.283   18:33:54 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@37 -- # bdevperf=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/examples/bdevperf
00:13:08.283   18:33:54 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@38 -- # bdevperf_rpc_sock=/tmp/bdevperf.sock
00:13:08.283   18:33:54 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@41 -- # bdevperf_pid=454559
00:13:08.283   18:33:54 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@42 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
00:13:08.283   18:33:54 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@40 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/examples/bdevperf -r /tmp/bdevperf.sock -g -s 2048 -q 256 -o 4096 -w randrw -M 50 -t 30 -m 0xf0 -L vfio_pci -L virtio_vfio_user
00:13:08.283   18:33:54 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@43 -- # waitforlisten 454559 /tmp/bdevperf.sock
00:13:08.283   18:33:54 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 454559 ']'
00:13:08.283   18:33:54 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/bdevperf.sock
00:13:08.283   18:33:54 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:08.283   18:33:54 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/bdevperf.sock...'
00:13:08.283  Waiting for process to start up and listen on UNIX domain socket /tmp/bdevperf.sock...
00:13:08.283   18:33:54 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:08.283   18:33:54 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:13:08.541  [2024-11-17 18:33:54.883607] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:13:08.541  [2024-11-17 18:33:54.883722] [ DPDK EAL parameters: bdevperf --no-shconf -c 0xf0 -m 2048 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid454559 ]
00:13:08.541  EAL: No free 2048 kB hugepages reported on node 1
00:13:09.108  [2024-11-17 18:33:55.642642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:13:09.366  [2024-11-17 18:33:55.686515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:13:09.366  [2024-11-17 18:33:55.686595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:13:09.366  [2024-11-17 18:33:55.686617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:13:09.366  [2024-11-17 18:33:55.686666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:13:09.366   18:33:55 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:09.366   18:33:55 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:13:09.366   18:33:55 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@44 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /tmp/bdevperf.sock bdev_virtio_attach_controller --dev-type scsi --trtype vfio-user --traddr /tmp/vfu_devices/vfu.scsi VirtioScsi0
00:13:09.627  [2024-11-17 18:33:55.946811] tgt_endpoint.c: 167:tgt_accept_poller: *NOTICE*: /tmp/vfu_devices/vfu.scsi: attached successfully
00:13:09.627  [2024-11-17 18:33:55.948962] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:13:09.627  [2024-11-17 18:33:55.949965] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:13:09.627  [2024-11-17 18:33:55.950953] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:13:09.627  [2024-11-17 18:33:55.951954] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:13:09.627  [2024-11-17 18:33:55.952980] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x4000, Offset 0x0, Flags 0xf, Cap offset 32
00:13:09.627  [2024-11-17 18:33:55.953037] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x3000, Map addr 0x7f1cadb01000
00:13:09.627  [2024-11-17 18:33:55.954012] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:13:09.627  [2024-11-17 18:33:55.955008] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:13:09.627  [2024-11-17 18:33:55.956004] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:13:09.627  [2024-11-17 18:33:55.957026] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:13:09.627  [2024-11-17 18:33:55.958015] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:13:09.627  [2024-11-17 18:33:55.959657] vfio_user_pci.c:  65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x80000000
00:13:09.627  [2024-11-17 18:33:55.969820] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /tmp/vfu_devices/vfu.scsi Setup Successfully
00:13:09.627  [2024-11-17 18:33:55.971114] virtio_vfio_user.c:  32:virtio_vfio_user_read_dev_config: *DEBUG*: offset 0x0, length 0x4
00:13:09.627  [2024-11-17 18:33:55.972117] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x2000-0x2003, len = 4
00:13:09.627  [2024-11-17 18:33:55.972150] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status 0
00:13:09.627  [2024-11-17 18:33:55.973116] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x14-0x14, len = 1
00:13:09.627  [2024-11-17 18:33:55.973135] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_STATUS with 0x0
00:13:09.627  [2024-11-17 18:33:55.973146] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 0, set status 0
00:13:09.628  [2024-11-17 18:33:55.973157] vfu_virtio.c: 190:vfu_virtio_dev_reset: *DEBUG*: device vfu.scsi resetting
00:13:09.628  [2024-11-17 18:33:55.974121] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:13:09.628  [2024-11-17 18:33:55.974138] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0x0
00:13:09.628  [2024-11-17 18:33:55.974160] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 0
00:13:09.628  [2024-11-17 18:33:55.975127] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:13:09.628  [2024-11-17 18:33:55.975144] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0x0
00:13:09.628  [2024-11-17 18:33:55.975165] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 0
00:13:09.628  [2024-11-17 18:33:55.975185] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status 1
00:13:09.628  [2024-11-17 18:33:55.976132] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x14-0x14, len = 1
00:13:09.628  [2024-11-17 18:33:55.976148] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_STATUS with 0x1
00:13:09.628  [2024-11-17 18:33:55.976157] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 0, set status 1
00:13:09.628  [2024-11-17 18:33:55.977143] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:13:09.628  [2024-11-17 18:33:55.977159] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0x1
00:13:09.628  [2024-11-17 18:33:55.977180] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 1
00:13:09.628  [2024-11-17 18:33:55.978144] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:13:09.628  [2024-11-17 18:33:55.978158] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0x1
00:13:09.628  [2024-11-17 18:33:55.978194] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 1
00:13:09.628  [2024-11-17 18:33:55.978241] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status 3
00:13:09.628  [2024-11-17 18:33:55.979162] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x14-0x14, len = 1
00:13:09.628  [2024-11-17 18:33:55.979175] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_STATUS with 0x3
00:13:09.628  [2024-11-17 18:33:55.979186] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 1, set status 3
00:13:09.628  [2024-11-17 18:33:55.980169] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:13:09.628  [2024-11-17 18:33:55.980185] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0x3
00:13:09.628  [2024-11-17 18:33:55.980221] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 3
00:13:09.628  [2024-11-17 18:33:55.981176] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x0-0x3, len = 4
00:13:09.628  [2024-11-17 18:33:55.981193] vfu_virtio.c: 937:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_DFSELECT with 0x0
00:13:09.628  [2024-11-17 18:33:55.982173] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x4-0x7, len = 4
00:13:09.628  [2024-11-17 18:33:55.982190] vfu_virtio.c:1072:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_DF_LO with 0x10000007
00:13:09.628  [2024-11-17 18:33:55.983204] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x0-0x3, len = 4
00:13:09.628  [2024-11-17 18:33:55.983237] vfu_virtio.c: 937:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_DFSELECT with 0x1
00:13:09.628  [2024-11-17 18:33:55.984196] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x4-0x7, len = 4
00:13:09.628  [2024-11-17 18:33:55.984215] vfu_virtio.c:1067:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_DF_HI with 0x5
00:13:09.628  [2024-11-17 18:33:55.984264] virtio_vfio_user.c: 127:virtio_vfio_user_get_features: *DEBUG*: feature_hi 0x5, feature_low 0x10000007
00:13:09.628  [2024-11-17 18:33:55.985211] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x8-0xB, len = 4
00:13:09.628  [2024-11-17 18:33:55.985242] vfu_virtio.c: 943:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_GFSELECT with 0x0
00:13:09.628  [2024-11-17 18:33:55.986216] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0xC-0xF, len = 4
00:13:09.628  [2024-11-17 18:33:55.986250] vfu_virtio.c: 956:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_GF_LO with 0x3
00:13:09.628  [2024-11-17 18:33:55.986260] vfu_virtio.c: 255:virtio_dev_set_features: *DEBUG*: vfu.scsi: negotiated features 0x3
00:13:09.628  [2024-11-17 18:33:55.987227] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x8-0xB, len = 4
00:13:09.628  [2024-11-17 18:33:55.987259] vfu_virtio.c: 943:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_GFSELECT with 0x1
00:13:09.628  [2024-11-17 18:33:55.988247] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0xC-0xF, len = 4
00:13:09.628  [2024-11-17 18:33:55.988277] vfu_virtio.c: 951:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_GF_HI with 0x1
00:13:09.628  [2024-11-17 18:33:55.988304] vfu_virtio.c: 255:virtio_dev_set_features: *DEBUG*: vfu.scsi: negotiated features 0x100000003
00:13:09.628  [2024-11-17 18:33:55.988373] virtio_vfio_user.c: 176:virtio_vfio_user_set_features: *DEBUG*: features 0x100000003
00:13:09.628  [2024-11-17 18:33:55.989237] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:13:09.628  [2024-11-17 18:33:55.989268] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0x3
00:13:09.628  [2024-11-17 18:33:55.989327] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 3
00:13:09.628  [2024-11-17 18:33:55.989407] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status b
00:13:09.628  [2024-11-17 18:33:55.990244] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x14-0x14, len = 1
00:13:09.628  [2024-11-17 18:33:55.990279] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_STATUS with 0xb
00:13:09.628  [2024-11-17 18:33:55.990288] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 3, set status b
00:13:09.628  [2024-11-17 18:33:55.991259] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:13:09.628  [2024-11-17 18:33:55.991292] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0xb
00:13:09.628  [2024-11-17 18:33:55.991323] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status b
00:13:09.628  [2024-11-17 18:33:55.992272] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:13:09.628  [2024-11-17 18:33:55.992303] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x0
00:13:09.628  [2024-11-17 18:33:55.993277] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x18-0x19, len = 2
00:13:09.628  [2024-11-17 18:33:55.993309] vfu_virtio.c:1135:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ queue 0 PCI_COMMON_Q_SIZE with 0x100
00:13:09.628  [2024-11-17 18:33:55.993379] virtio_vfio_user.c: 216:virtio_vfio_user_get_queue_size: *DEBUG*: queue 0, size 256
00:13:09.628  [2024-11-17 18:33:55.994284] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:13:09.628  [2024-11-17 18:33:55.994315] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x0
00:13:09.628  [2024-11-17 18:33:55.995291] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x20-0x23, len = 4
00:13:09.628  [2024-11-17 18:33:55.995321] vfu_virtio.c:1020:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 0 PCI_COMMON_Q_DESCLO with 0x6a601000
00:13:09.628  [2024-11-17 18:33:55.996300] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x24-0x27, len = 4
00:13:09.628  [2024-11-17 18:33:55.996331] vfu_virtio.c:1025:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 0 PCI_COMMON_Q_DESCHI with 0x2000
00:13:09.628  [2024-11-17 18:33:55.997303] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x28-0x2B, len = 4
00:13:09.628  [2024-11-17 18:33:55.997333] vfu_virtio.c:1030:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 0 PCI_COMMON_Q_AVAILLO with 0x6a602000
00:13:09.628  [2024-11-17 18:33:55.998314] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x2C-0x2F, len = 4
00:13:09.628  [2024-11-17 18:33:55.998345] vfu_virtio.c:1035:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 0 PCI_COMMON_Q_AVAILHI with 0x2000
00:13:09.628  [2024-11-17 18:33:55.999344] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x30-0x33, len = 4
00:13:09.628  [2024-11-17 18:33:55.999360] vfu_virtio.c:1040:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 0 PCI_COMMON_Q_USEDLO with 0x6a603000
00:13:09.628  [2024-11-17 18:33:56.000331] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x34-0x37, len = 4
00:13:09.628  [2024-11-17 18:33:56.000362] vfu_virtio.c:1045:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 0 PCI_COMMON_Q_USEDHI with 0x2000
00:13:09.628  [2024-11-17 18:33:56.001342] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x1E-0x1F, len = 2
00:13:09.628  [2024-11-17 18:33:56.001373] vfu_virtio.c:1123:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_Q_NOFF with 0x0
00:13:09.628  [2024-11-17 18:33:56.002352] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2
00:13:09.628  [2024-11-17 18:33:56.002382] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x1
00:13:09.628  [2024-11-17 18:33:56.002396] vfu_virtio.c: 267:virtio_dev_enable_vq: *DEBUG*: vfu.scsi: enable vq 0
00:13:09.628  [2024-11-17 18:33:56.002404] vfu_virtio.c:  71:virtio_dev_map_vq: *DEBUG*: vfu.scsi: try to map vq 0
00:13:09.628  [2024-11-17 18:33:56.002433] vfu_virtio.c: 107:virtio_dev_map_vq: *DEBUG*: vfu.scsi: map vq 0 successfully
00:13:09.628  [2024-11-17 18:33:56.002482] virtio_vfio_user.c: 331:virtio_vfio_user_setup_queue: *DEBUG*: queue 0 addresses:
00:13:09.628  [2024-11-17 18:33:56.002518] virtio_vfio_user.c: 332:virtio_vfio_user_setup_queue: *DEBUG*: 	 desc_addr: 20006a601000
00:13:09.628  [2024-11-17 18:33:56.002532] virtio_vfio_user.c: 333:virtio_vfio_user_setup_queue: *DEBUG*: 	 aval_addr: 20006a602000
00:13:09.628  [2024-11-17 18:33:56.002547] virtio_vfio_user.c: 334:virtio_vfio_user_setup_queue: *DEBUG*: 	 used_addr: 20006a603000
00:13:09.628  [2024-11-17 18:33:56.003362] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:13:09.628  [2024-11-17 18:33:56.003394] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x1
00:13:09.629  [2024-11-17 18:33:56.004375] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x18-0x19, len = 2
00:13:09.629  [2024-11-17 18:33:56.004407] vfu_virtio.c:1135:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ queue 1 PCI_COMMON_Q_SIZE with 0x100
00:13:09.629  [2024-11-17 18:33:56.004454] virtio_vfio_user.c: 216:virtio_vfio_user_get_queue_size: *DEBUG*: queue 1, size 256
00:13:09.629  [2024-11-17 18:33:56.005377] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:13:09.629  [2024-11-17 18:33:56.005411] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x1
00:13:09.629  [2024-11-17 18:33:56.006395] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x20-0x23, len = 4
00:13:09.629  [2024-11-17 18:33:56.006428] vfu_virtio.c:1020:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 1 PCI_COMMON_Q_DESCLO with 0x6a2ec000
00:13:09.629  [2024-11-17 18:33:56.007397] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x24-0x27, len = 4
00:13:09.629  [2024-11-17 18:33:56.007430] vfu_virtio.c:1025:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 1 PCI_COMMON_Q_DESCHI with 0x2000
00:13:09.629  [2024-11-17 18:33:56.008427] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x28-0x2B, len = 4
00:13:09.629  [2024-11-17 18:33:56.008443] vfu_virtio.c:1030:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 1 PCI_COMMON_Q_AVAILLO with 0x6a2ed000
00:13:09.629  [2024-11-17 18:33:56.009406] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x2C-0x2F, len = 4
00:13:09.629  [2024-11-17 18:33:56.009438] vfu_virtio.c:1035:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 1 PCI_COMMON_Q_AVAILHI with 0x2000
00:13:09.629  [2024-11-17 18:33:56.010412] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x30-0x33, len = 4
00:13:09.629  [2024-11-17 18:33:56.010444] vfu_virtio.c:1040:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 1 PCI_COMMON_Q_USEDLO with 0x6a2ee000
00:13:09.629  [2024-11-17 18:33:56.011426] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x34-0x37, len = 4
00:13:09.629  [2024-11-17 18:33:56.011457] vfu_virtio.c:1045:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 1 PCI_COMMON_Q_USEDHI with 0x2000
00:13:09.629  [2024-11-17 18:33:56.012434] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x1E-0x1F, len = 2
00:13:09.629  [2024-11-17 18:33:56.012465] vfu_virtio.c:1123:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_Q_NOFF with 0x1
00:13:09.629  [2024-11-17 18:33:56.013444] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2
00:13:09.629  [2024-11-17 18:33:56.013461] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x1
00:13:09.629  [2024-11-17 18:33:56.013475] vfu_virtio.c: 267:virtio_dev_enable_vq: *DEBUG*: vfu.scsi: enable vq 1
00:13:09.629  [2024-11-17 18:33:56.013484] vfu_virtio.c:  71:virtio_dev_map_vq: *DEBUG*: vfu.scsi: try to map vq 1
00:13:09.629  [2024-11-17 18:33:56.013497] vfu_virtio.c: 107:virtio_dev_map_vq: *DEBUG*: vfu.scsi: map vq 1 successfully
00:13:09.629  [2024-11-17 18:33:56.013531] virtio_vfio_user.c: 331:virtio_vfio_user_setup_queue: *DEBUG*: queue 1 addresses:
00:13:09.629  [2024-11-17 18:33:56.013565] virtio_vfio_user.c: 332:virtio_vfio_user_setup_queue: *DEBUG*: 	 desc_addr: 20006a2ec000
00:13:09.629  [2024-11-17 18:33:56.013583] virtio_vfio_user.c: 333:virtio_vfio_user_setup_queue: *DEBUG*: 	 aval_addr: 20006a2ed000
00:13:09.629  [2024-11-17 18:33:56.013596] virtio_vfio_user.c: 334:virtio_vfio_user_setup_queue: *DEBUG*: 	 used_addr: 20006a2ee000
00:13:09.629  [2024-11-17 18:33:56.014458] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:13:09.629  [2024-11-17 18:33:56.014488] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x2
00:13:09.629  [2024-11-17 18:33:56.015468] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x18-0x19, len = 2
00:13:09.629  [2024-11-17 18:33:56.015498] vfu_virtio.c:1135:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ queue 2 PCI_COMMON_Q_SIZE with 0x100
00:13:09.629  [2024-11-17 18:33:56.015554] virtio_vfio_user.c: 216:virtio_vfio_user_get_queue_size: *DEBUG*: queue 2, size 256
00:13:09.629  [2024-11-17 18:33:56.016473] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:13:09.629  [2024-11-17 18:33:56.016503] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x2
00:13:09.629  [2024-11-17 18:33:56.017474] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x20-0x23, len = 4
00:13:09.629  [2024-11-17 18:33:56.017505] vfu_virtio.c:1020:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 2 PCI_COMMON_Q_DESCLO with 0x6a2e8000
00:13:09.629  [2024-11-17 18:33:56.018488] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x24-0x27, len = 4
00:13:09.629  [2024-11-17 18:33:56.018518] vfu_virtio.c:1025:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 2 PCI_COMMON_Q_DESCHI with 0x2000
00:13:09.629  [2024-11-17 18:33:56.019491] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x28-0x2B, len = 4
00:13:09.629  [2024-11-17 18:33:56.019521] vfu_virtio.c:1030:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 2 PCI_COMMON_Q_AVAILLO with 0x6a2e9000
00:13:09.629  [2024-11-17 18:33:56.020500] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x2C-0x2F, len = 4
00:13:09.629  [2024-11-17 18:33:56.020534] vfu_virtio.c:1035:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 2 PCI_COMMON_Q_AVAILHI with 0x2000
00:13:09.629  [2024-11-17 18:33:56.021504] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x30-0x33, len = 4
00:13:09.629  [2024-11-17 18:33:56.021534] vfu_virtio.c:1040:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 2 PCI_COMMON_Q_USEDLO with 0x6a2ea000
00:13:09.629  [2024-11-17 18:33:56.022513] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x34-0x37, len = 4
00:13:09.629  [2024-11-17 18:33:56.022543] vfu_virtio.c:1045:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 2 PCI_COMMON_Q_USEDHI with 0x2000
00:13:09.629  [2024-11-17 18:33:56.023522] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x1E-0x1F, len = 2
00:13:09.629  [2024-11-17 18:33:56.023552] vfu_virtio.c:1123:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_Q_NOFF with 0x2
00:13:09.629  [2024-11-17 18:33:56.024530] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2
00:13:09.629  [2024-11-17 18:33:56.024560] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x1
00:13:09.629  [2024-11-17 18:33:56.024575] vfu_virtio.c: 267:virtio_dev_enable_vq: *DEBUG*: vfu.scsi: enable vq 2
00:13:09.629  [2024-11-17 18:33:56.024583] vfu_virtio.c:  71:virtio_dev_map_vq: *DEBUG*: vfu.scsi: try to map vq 2
00:13:09.629  [2024-11-17 18:33:56.024594] vfu_virtio.c: 107:virtio_dev_map_vq: *DEBUG*: vfu.scsi: map vq 2 successfully
00:13:09.629  [2024-11-17 18:33:56.024651] virtio_vfio_user.c: 331:virtio_vfio_user_setup_queue: *DEBUG*: queue 2 addresses:
00:13:09.629  [2024-11-17 18:33:56.024701] virtio_vfio_user.c: 332:virtio_vfio_user_setup_queue: *DEBUG*: 	 desc_addr: 20006a2e8000
00:13:09.629  [2024-11-17 18:33:56.024721] virtio_vfio_user.c: 333:virtio_vfio_user_setup_queue: *DEBUG*: 	 aval_addr: 20006a2e9000
00:13:09.629  [2024-11-17 18:33:56.024742] virtio_vfio_user.c: 334:virtio_vfio_user_setup_queue: *DEBUG*: 	 used_addr: 20006a2ea000
00:13:09.629  [2024-11-17 18:33:56.025541] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:13:09.629  [2024-11-17 18:33:56.025573] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x3
00:13:09.629  [2024-11-17 18:33:56.026544] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x18-0x19, len = 2
00:13:09.629  [2024-11-17 18:33:56.026578] vfu_virtio.c:1135:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ queue 3 PCI_COMMON_Q_SIZE with 0x100
00:13:09.629  [2024-11-17 18:33:56.026626] virtio_vfio_user.c: 216:virtio_vfio_user_get_queue_size: *DEBUG*: queue 3, size 256
00:13:09.629  [2024-11-17 18:33:56.027555] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:13:09.629  [2024-11-17 18:33:56.027586] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x3
00:13:09.629  [2024-11-17 18:33:56.028567] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x20-0x23, len = 4
00:13:09.629  [2024-11-17 18:33:56.028599] vfu_virtio.c:1020:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 3 PCI_COMMON_Q_DESCLO with 0x6a2e4000
00:13:09.629  [2024-11-17 18:33:56.029574] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x24-0x27, len = 4
00:13:09.629  [2024-11-17 18:33:56.029606] vfu_virtio.c:1025:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 3 PCI_COMMON_Q_DESCHI with 0x2000
00:13:09.629  [2024-11-17 18:33:56.030584] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x28-0x2B, len = 4
00:13:09.630  [2024-11-17 18:33:56.030618] vfu_virtio.c:1030:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 3 PCI_COMMON_Q_AVAILLO with 0x6a2e5000
00:13:09.630  [2024-11-17 18:33:56.031588] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x2C-0x2F, len = 4
00:13:09.630  [2024-11-17 18:33:56.031620] vfu_virtio.c:1035:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 3 PCI_COMMON_Q_AVAILHI with 0x2000
00:13:09.630  [2024-11-17 18:33:56.032596] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x30-0x33, len = 4
00:13:09.630  [2024-11-17 18:33:56.032627] vfu_virtio.c:1040:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 3 PCI_COMMON_Q_USEDLO with 0x6a2e6000
00:13:09.630  [2024-11-17 18:33:56.033607] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x34-0x37, len = 4
00:13:09.630  [2024-11-17 18:33:56.033638] vfu_virtio.c:1045:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 3 PCI_COMMON_Q_USEDHI with 0x2000
00:13:09.630  [2024-11-17 18:33:56.034615] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x1E-0x1F, len = 2
00:13:09.630  [2024-11-17 18:33:56.034650] vfu_virtio.c:1123:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_Q_NOFF with 0x3
00:13:09.630  [2024-11-17 18:33:56.035630] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2
00:13:09.630  [2024-11-17 18:33:56.035661] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x1
00:13:09.630  [2024-11-17 18:33:56.035672] vfu_virtio.c: 267:virtio_dev_enable_vq: *DEBUG*: vfu.scsi: enable vq 3
00:13:09.630  [2024-11-17 18:33:56.035681] vfu_virtio.c:  71:virtio_dev_map_vq: *DEBUG*: vfu.scsi: try to map vq 3
00:13:09.630  [2024-11-17 18:33:56.035691] vfu_virtio.c: 107:virtio_dev_map_vq: *DEBUG*: vfu.scsi: map vq 3 successfully
00:13:09.630  [2024-11-17 18:33:56.035746] virtio_vfio_user.c: 331:virtio_vfio_user_setup_queue: *DEBUG*: queue 3 addresses:
00:13:09.630  [2024-11-17 18:33:56.035790] virtio_vfio_user.c: 332:virtio_vfio_user_setup_queue: *DEBUG*: 	 desc_addr: 20006a2e4000
00:13:09.630  [2024-11-17 18:33:56.035814] virtio_vfio_user.c: 333:virtio_vfio_user_setup_queue: *DEBUG*: 	 aval_addr: 20006a2e5000
00:13:09.630  [2024-11-17 18:33:56.035832] virtio_vfio_user.c: 334:virtio_vfio_user_setup_queue: *DEBUG*: 	 used_addr: 20006a2e6000
00:13:09.630  [2024-11-17 18:33:56.036637] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:13:09.630  [2024-11-17 18:33:56.036667] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0xb
00:13:09.630  [2024-11-17 18:33:56.036729] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status b
00:13:09.630  [2024-11-17 18:33:56.036776] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status f
00:13:09.630  [2024-11-17 18:33:56.037650] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x14-0x14, len = 1
00:13:09.630  [2024-11-17 18:33:56.037679] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_STATUS with 0xf
00:13:09.630  [2024-11-17 18:33:56.037690] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status b, set status f
00:13:09.630  [2024-11-17 18:33:56.037697] vfu_virtio.c:1365:vfu_virtio_dev_start: *DEBUG*: start vfu.scsi
00:13:09.630  [2024-11-17 18:33:56.039982] vfu_virtio.c:1377:vfu_virtio_dev_start: *DEBUG*: vfu.scsi is started with ret 0
00:13:09.630  [2024-11-17 18:33:56.041044] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:13:09.630  [2024-11-17 18:33:56.041062] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0xf
00:13:09.630  [2024-11-17 18:33:56.041120] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status f
00:13:09.630  VirtioScsi0t0 VirtioScsi0t1
00:13:09.630   18:33:56 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@46 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /tmp/bdevperf.sock bdev_virtio_attach_controller --dev-type blk --trtype vfio-user --traddr /tmp/vfu_devices/vfu.blk VirtioBlk0
00:13:09.891  [2024-11-17 18:33:56.275711] tgt_endpoint.c: 167:tgt_accept_poller: *NOTICE*: /tmp/vfu_devices/vfu.blk: attached successfully
00:13:09.891  [2024-11-17 18:33:56.277872] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:13:09.891  [2024-11-17 18:33:56.278849] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:13:09.891  [2024-11-17 18:33:56.279850] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:13:09.891  [2024-11-17 18:33:56.280869] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:13:09.891  [2024-11-17 18:33:56.281888] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x4000, Offset 0x0, Flags 0xf, Cap offset 32
00:13:09.891  [2024-11-17 18:33:56.281955] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x3000, Map addr 0x7f1cadb00000
00:13:09.891  [2024-11-17 18:33:56.282876] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:13:09.891  [2024-11-17 18:33:56.283944] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:13:09.891  [2024-11-17 18:33:56.284934] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:13:09.891  [2024-11-17 18:33:56.285897] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:13:09.891  [2024-11-17 18:33:56.286922] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:13:09.891  [2024-11-17 18:33:56.288529] vfio_user_pci.c:  65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x80000000
00:13:09.891  [2024-11-17 18:33:56.298607] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user1, Path /tmp/vfu_devices/vfu.blk Setup Successfully
00:13:09.891  [2024-11-17 18:33:56.300006] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status 0
00:13:09.891  [2024-11-17 18:33:56.300994] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x14-0x14, len = 1
00:13:09.891  [2024-11-17 18:33:56.301023] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_STATUS with 0x0
00:13:09.891  [2024-11-17 18:33:56.301036] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 0, set status 0
00:13:09.891  [2024-11-17 18:33:56.301045] vfu_virtio.c: 190:vfu_virtio_dev_reset: *DEBUG*: device vfu.blk resetting
00:13:09.891  [2024-11-17 18:33:56.302007] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:13:09.891  [2024-11-17 18:33:56.302022] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0x0
00:13:09.891  [2024-11-17 18:33:56.302055] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 0
00:13:09.891  [2024-11-17 18:33:56.303012] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:13:09.891  [2024-11-17 18:33:56.303029] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0x0
00:13:09.891  [2024-11-17 18:33:56.303069] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 0
00:13:09.891  [2024-11-17 18:33:56.303100] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status 1
00:13:09.891  [2024-11-17 18:33:56.304031] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x14-0x14, len = 1
00:13:09.891  [2024-11-17 18:33:56.304045] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_STATUS with 0x1
00:13:09.891  [2024-11-17 18:33:56.304058] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 0, set status 1
00:13:09.891  [2024-11-17 18:33:56.305026] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:13:09.891  [2024-11-17 18:33:56.305042] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0x1
00:13:09.891  [2024-11-17 18:33:56.305077] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 1
00:13:09.891  [2024-11-17 18:33:56.306037] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:13:09.891  [2024-11-17 18:33:56.306054] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0x1
00:13:09.891  [2024-11-17 18:33:56.306081] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 1
00:13:09.891  [2024-11-17 18:33:56.306113] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status 3
00:13:09.891  [2024-11-17 18:33:56.307052] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x14-0x14, len = 1
00:13:09.891  [2024-11-17 18:33:56.307068] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_STATUS with 0x3
00:13:09.891  [2024-11-17 18:33:56.307076] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 1, set status 3
00:13:09.891  [2024-11-17 18:33:56.308056] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:13:09.891  [2024-11-17 18:33:56.308070] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0x3
00:13:09.891  [2024-11-17 18:33:56.308114] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 3
00:13:09.891  [2024-11-17 18:33:56.309072] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x0-0x3, len = 4
00:13:09.891  [2024-11-17 18:33:56.309085] vfu_virtio.c: 937:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_DFSELECT with 0x0
00:13:09.891  [2024-11-17 18:33:56.310080] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x4-0x7, len = 4
00:13:09.891  [2024-11-17 18:33:56.310097] vfu_virtio.c:1072:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_DF_LO with 0x10007646
00:13:09.891  [2024-11-17 18:33:56.311089] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x0-0x3, len = 4
00:13:09.891  [2024-11-17 18:33:56.311103] vfu_virtio.c: 937:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_DFSELECT with 0x1
00:13:09.891  [2024-11-17 18:33:56.312094] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x4-0x7, len = 4
00:13:09.891  [2024-11-17 18:33:56.312108] vfu_virtio.c:1067:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_DF_HI with 0x5
00:13:09.891  [2024-11-17 18:33:56.312144] virtio_vfio_user.c: 127:virtio_vfio_user_get_features: *DEBUG*: feature_hi 0x5, feature_low 0x10007646
00:13:09.891  [2024-11-17 18:33:56.313109] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x8-0xB, len = 4
00:13:09.891  [2024-11-17 18:33:56.313122] vfu_virtio.c: 943:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_GFSELECT with 0x0
00:13:09.891  [2024-11-17 18:33:56.314120] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0xC-0xF, len = 4
00:13:09.891  [2024-11-17 18:33:56.314133] vfu_virtio.c: 956:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_GF_LO with 0x3446
00:13:09.891  [2024-11-17 18:33:56.314147] vfu_virtio.c: 255:virtio_dev_set_features: *DEBUG*: vfu.blk: negotiated features 0x3446
00:13:09.891  [2024-11-17 18:33:56.315122] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x8-0xB, len = 4
00:13:09.891  [2024-11-17 18:33:56.315138] vfu_virtio.c: 943:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_GFSELECT with 0x1
00:13:09.891  [2024-11-17 18:33:56.316136] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0xC-0xF, len = 4
00:13:09.891  [2024-11-17 18:33:56.316152] vfu_virtio.c: 951:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_GF_HI with 0x1
00:13:09.891  [2024-11-17 18:33:56.316162] vfu_virtio.c: 255:virtio_dev_set_features: *DEBUG*: vfu.blk: negotiated features 0x100003446
00:13:09.891  [2024-11-17 18:33:56.316200] virtio_vfio_user.c: 176:virtio_vfio_user_set_features: *DEBUG*: features 0x100003446
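The DF_LO/DF_HI reads and GF_LO/GF_HI writes above are the two halves of virtio feature negotiation: the device offers a 64-bit feature mask and the driver writes back the subset it accepts. A minimal sketch (not SPDK code) using the exact values from this log:

```python
# Feature words read/written through PCI_COMMON_DF_*/GF_* in the log above.
device_features = (0x5 << 32) | 0x10007646   # READ PCI_COMMON_DF_HI / DF_LO
driver_features = (0x1 << 32) | 0x3446       # WRITE PCI_COMMON_GF_HI / GF_LO

# The accepted set must be a subset of what the device offered, so the
# bitwise AND leaves the driver's mask unchanged.
negotiated = device_features & driver_features
assert negotiated == driver_features == 0x100003446  # "negotiated features 0x100003446"
```

Bit 32 here is VIRTIO_F_VERSION_1; the low word carries the virtio-blk device-specific feature bits.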
00:13:09.891  [2024-11-17 18:33:56.317145] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:13:09.891  [2024-11-17 18:33:56.317161] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0x3
00:13:09.891  [2024-11-17 18:33:56.317194] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 3
00:13:09.891  [2024-11-17 18:33:56.317217] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status b
00:13:09.891  [2024-11-17 18:33:56.318153] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x14-0x14, len = 1
00:13:09.891  [2024-11-17 18:33:56.318166] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_STATUS with 0xb
00:13:09.891  [2024-11-17 18:33:56.318176] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 3, set status b
00:13:09.891  [2024-11-17 18:33:56.319158] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:13:09.891  [2024-11-17 18:33:56.319175] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0xb
00:13:09.891  [2024-11-17 18:33:56.319211] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status b
00:13:09.891  [2024-11-17 18:33:56.319236] virtio_vfio_user.c:  32:virtio_vfio_user_read_dev_config: *DEBUG*: offset 0x22, length 0x2
00:13:09.892  [2024-11-17 18:33:56.320165] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x2022-0x2023, len = 2
00:13:09.892  [2024-11-17 18:33:56.320207] virtio_vfio_user.c:  32:virtio_vfio_user_read_dev_config: *DEBUG*: offset 0x14, length 0x4
00:13:09.892  [2024-11-17 18:33:56.321183] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x2014-0x2017, len = 4
00:13:09.892  [2024-11-17 18:33:56.321217] virtio_vfio_user.c:  32:virtio_vfio_user_read_dev_config: *DEBUG*: offset 0x0, length 0x8
00:13:09.892  [2024-11-17 18:33:56.322196] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x2000-0x2007, len = 8
00:13:09.892  [2024-11-17 18:33:56.322260] virtio_vfio_user.c:  32:virtio_vfio_user_read_dev_config: *DEBUG*: offset 0x22, length 0x2
00:13:09.892  [2024-11-17 18:33:56.323226] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x2022-0x2023, len = 2
00:13:09.892  [2024-11-17 18:33:56.323284] virtio_vfio_user.c:  32:virtio_vfio_user_read_dev_config: *DEBUG*: offset 0x8, length 0x4
00:13:09.892  [2024-11-17 18:33:56.324230] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x2008-0x200B, len = 4
00:13:09.892  [2024-11-17 18:33:56.324288] virtio_vfio_user.c:  32:virtio_vfio_user_read_dev_config: *DEBUG*: offset 0xc, length 0x4
00:13:09.892  [2024-11-17 18:33:56.325238] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x200C-0x200F, len = 4
00:13:09.892  [2024-11-17 18:33:56.326249] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x16-0x17, len = 2
00:13:09.892  [2024-11-17 18:33:56.326288] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_SELECT with 0x0
00:13:09.892  [2024-11-17 18:33:56.327261] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x18-0x19, len = 2
00:13:09.892  [2024-11-17 18:33:56.327294] vfu_virtio.c:1135:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ queue 0 PCI_COMMON_Q_SIZE with 0x100
00:13:09.892  [2024-11-17 18:33:56.327341] virtio_vfio_user.c: 216:virtio_vfio_user_get_queue_size: *DEBUG*: queue 0, size 256
00:13:09.892  [2024-11-17 18:33:56.328285] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x16-0x17, len = 2
00:13:09.892  [2024-11-17 18:33:56.328321] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_SELECT with 0x0
00:13:09.892  [2024-11-17 18:33:56.329292] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x20-0x23, len = 4
00:13:09.892  [2024-11-17 18:33:56.329325] vfu_virtio.c:1020:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 0 PCI_COMMON_Q_DESCLO with 0x6a2e0000
00:13:09.892  [2024-11-17 18:33:56.330307] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x24-0x27, len = 4
00:13:09.892  [2024-11-17 18:33:56.330340] vfu_virtio.c:1025:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 0 PCI_COMMON_Q_DESCHI with 0x2000
00:13:09.892  [2024-11-17 18:33:56.331317] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x28-0x2B, len = 4
00:13:09.892  [2024-11-17 18:33:56.331350] vfu_virtio.c:1030:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 0 PCI_COMMON_Q_AVAILLO with 0x6a2e1000
00:13:09.892  [2024-11-17 18:33:56.332330] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x2C-0x2F, len = 4
00:13:09.892  [2024-11-17 18:33:56.332364] vfu_virtio.c:1035:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 0 PCI_COMMON_Q_AVAILHI with 0x2000
00:13:09.892  [2024-11-17 18:33:56.333335] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x30-0x33, len = 4
00:13:09.892  [2024-11-17 18:33:56.333369] vfu_virtio.c:1040:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 0 PCI_COMMON_Q_USEDLO with 0x6a2e2000
00:13:09.892  [2024-11-17 18:33:56.334354] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x34-0x37, len = 4
00:13:09.892  [2024-11-17 18:33:56.334369] vfu_virtio.c:1045:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 0 PCI_COMMON_Q_USEDHI with 0x2000
00:13:09.892  [2024-11-17 18:33:56.335356] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x1E-0x1F, len = 2
00:13:09.892  [2024-11-17 18:33:56.335392] vfu_virtio.c:1123:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_Q_NOFF with 0x0
00:13:09.892  [2024-11-17 18:33:56.336366] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x1C-0x1D, len = 2
00:13:09.892  [2024-11-17 18:33:56.336401] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_ENABLE with 0x1
00:13:09.892  [2024-11-17 18:33:56.336411] vfu_virtio.c: 267:virtio_dev_enable_vq: *DEBUG*: vfu.blk: enable vq 0
00:13:09.892  [2024-11-17 18:33:56.336427] vfu_virtio.c:  71:virtio_dev_map_vq: *DEBUG*: vfu.blk: try to map vq 0
00:13:09.892  [2024-11-17 18:33:56.336446] vfu_virtio.c: 107:virtio_dev_map_vq: *DEBUG*: vfu.blk: map vq 0 successfully
00:13:09.892  [2024-11-17 18:33:56.336505] virtio_vfio_user.c: 331:virtio_vfio_user_setup_queue: *DEBUG*: queue 0 addresses:
00:13:09.892  [2024-11-17 18:33:56.336550] virtio_vfio_user.c: 332:virtio_vfio_user_setup_queue: *DEBUG*: 	 desc_addr: 20006a2e0000
00:13:09.892  [2024-11-17 18:33:56.336582] virtio_vfio_user.c: 333:virtio_vfio_user_setup_queue: *DEBUG*: 	 aval_addr: 20006a2e1000
00:13:09.892  [2024-11-17 18:33:56.336601] virtio_vfio_user.c: 334:virtio_vfio_user_setup_queue: *DEBUG*: 	 used_addr: 20006a2e2000
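Each 64-bit virtqueue ring address is programmed as two 32-bit register writes (PCI_COMMON_Q_DESCHI/LO, Q_AVAILHI/LO, Q_USEDHI/LO). Joining the halves reproduces the addresses printed by virtio_vfio_user_setup_queue above; a hedged sketch, not SPDK source:

```python
def combine(hi: int, lo: int) -> int:
    """Join the HI/LO register halves into one 64-bit guest address."""
    return (hi << 32) | lo

# Values written for vfu.blk queue 0 in the log above.
desc_addr  = combine(0x2000, 0x6a2e0000)
avail_addr = combine(0x2000, 0x6a2e1000)
used_addr  = combine(0x2000, 0x6a2e2000)
assert (desc_addr, avail_addr, used_addr) == (
    0x20006a2e0000, 0x20006a2e1000, 0x20006a2e2000)
```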
00:13:09.892  [2024-11-17 18:33:56.337375] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x16-0x17, len = 2
00:13:09.892  [2024-11-17 18:33:56.337406] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_SELECT with 0x1
00:13:09.892  [2024-11-17 18:33:56.338390] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x18-0x19, len = 2
00:13:09.892  [2024-11-17 18:33:56.338420] vfu_virtio.c:1135:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ queue 1 PCI_COMMON_Q_SIZE with 0x100
00:13:09.892  [2024-11-17 18:33:56.338464] virtio_vfio_user.c: 216:virtio_vfio_user_get_queue_size: *DEBUG*: queue 1, size 256
00:13:09.892  [2024-11-17 18:33:56.339402] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x16-0x17, len = 2
00:13:09.892  [2024-11-17 18:33:56.339431] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_SELECT with 0x1
00:13:09.892  [2024-11-17 18:33:56.340417] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x20-0x23, len = 4
00:13:09.892  [2024-11-17 18:33:56.340448] vfu_virtio.c:1020:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 1 PCI_COMMON_Q_DESCLO with 0x6a2dc000
00:13:09.892  [2024-11-17 18:33:56.341419] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x24-0x27, len = 4
00:13:09.892  [2024-11-17 18:33:56.341433] vfu_virtio.c:1025:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 1 PCI_COMMON_Q_DESCHI with 0x2000
00:13:09.892  [2024-11-17 18:33:56.342430] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x28-0x2B, len = 4
00:13:09.892  [2024-11-17 18:33:56.342462] vfu_virtio.c:1030:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 1 PCI_COMMON_Q_AVAILLO with 0x6a2dd000
00:13:09.892  [2024-11-17 18:33:56.343442] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x2C-0x2F, len = 4
00:13:09.892  [2024-11-17 18:33:56.343473] vfu_virtio.c:1035:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 1 PCI_COMMON_Q_AVAILHI with 0x2000
00:13:09.892  [2024-11-17 18:33:56.344455] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x30-0x33, len = 4
00:13:09.892  [2024-11-17 18:33:56.344487] vfu_virtio.c:1040:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 1 PCI_COMMON_Q_USEDLO with 0x6a2de000
00:13:09.892  [2024-11-17 18:33:56.345472] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x34-0x37, len = 4
00:13:09.892  [2024-11-17 18:33:56.345502] vfu_virtio.c:1045:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 1 PCI_COMMON_Q_USEDHI with 0x2000
00:13:09.892  [2024-11-17 18:33:56.346477] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x1E-0x1F, len = 2
00:13:09.892  [2024-11-17 18:33:56.346507] vfu_virtio.c:1123:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_Q_NOFF with 0x1
00:13:09.892  [2024-11-17 18:33:56.347488] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x1C-0x1D, len = 2
00:13:09.892  [2024-11-17 18:33:56.347517] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_ENABLE with 0x1
00:13:09.892  [2024-11-17 18:33:56.347528] vfu_virtio.c: 267:virtio_dev_enable_vq: *DEBUG*: vfu.blk: enable vq 1
00:13:09.892  [2024-11-17 18:33:56.347535] vfu_virtio.c:  71:virtio_dev_map_vq: *DEBUG*: vfu.blk: try to map vq 1
00:13:09.892  [2024-11-17 18:33:56.347547] vfu_virtio.c: 107:virtio_dev_map_vq: *DEBUG*: vfu.blk: map vq 1 successfully
00:13:09.892  [2024-11-17 18:33:56.347587] virtio_vfio_user.c: 331:virtio_vfio_user_setup_queue: *DEBUG*: queue 1 addresses:
00:13:09.892  [2024-11-17 18:33:56.347632] virtio_vfio_user.c: 332:virtio_vfio_user_setup_queue: *DEBUG*: 	 desc_addr: 20006a2dc000
00:13:09.892  [2024-11-17 18:33:56.347649] virtio_vfio_user.c: 333:virtio_vfio_user_setup_queue: *DEBUG*: 	 aval_addr: 20006a2dd000
00:13:09.892  [2024-11-17 18:33:56.347665] virtio_vfio_user.c: 334:virtio_vfio_user_setup_queue: *DEBUG*: 	 used_addr: 20006a2de000
00:13:09.892  [2024-11-17 18:33:56.348495] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:13:09.892  [2024-11-17 18:33:56.348529] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0xb
00:13:09.892  [2024-11-17 18:33:56.348571] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status b
00:13:09.892  [2024-11-17 18:33:56.348620] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status f
00:13:09.892  [2024-11-17 18:33:56.349507] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x14-0x14, len = 1
00:13:09.892  [2024-11-17 18:33:56.349542] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_STATUS with 0xf
00:13:09.892  [2024-11-17 18:33:56.349550] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status b, set status f
00:13:09.892  [2024-11-17 18:33:56.349559] vfu_virtio.c:1365:vfu_virtio_dev_start: *DEBUG*: start vfu.blk
00:13:09.892  [2024-11-17 18:33:56.351698] vfu_virtio.c:1377:vfu_virtio_dev_start: *DEBUG*: vfu.blk is started with ret 0
00:13:09.892  [2024-11-17 18:33:56.352786] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:13:09.892  [2024-11-17 18:33:56.352820] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0xf
00:13:09.892  [2024-11-17 18:33:56.352872] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status f
00:13:09.892  VirtioBlk0
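The PCI_COMMON_STATUS sequence traced for both devices (0x1, 0x3, 0xb, 0xf) is the standard virtio initialization handshake: each step ORs one more status bit into the register. A small illustration using the virtio 1.x status bit names (sketch only):

```python
# Virtio device-status bits (per the virtio spec).
ACKNOWLEDGE, DRIVER, DRIVER_OK, FEATURES_OK = 0x1, 0x2, 0x4, 0x8

status = 0  # device starts reset ("device status 0" in the log)
steps = []
for bit in (ACKNOWLEDGE, DRIVER, FEATURES_OK, DRIVER_OK):
    status |= bit          # driver sets bits cumulatively, never clears them
    steps.append(status)

# Matches the WRITE PCI_COMMON_STATUS values seen above for vfu.scsi/vfu.blk.
assert steps == [0x1, 0x3, 0xb, 0xf]
```

Writing 0 back (as the shutdown trace at the end of this log shows) resets the device.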
00:13:09.892   18:33:56 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@50 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /tmp/bdevperf.sock perform_tests
00:13:10.152  Running I/O for 30 seconds...
00:13:12.028      95730.00 IOPS,   373.95 MiB/s
[2024-11-17T17:33:59.541Z]     95923.00 IOPS,   374.70 MiB/s
[2024-11-17T17:34:00.919Z]     95991.33 IOPS,   374.97 MiB/s
[2024-11-17T17:34:01.856Z]     95878.00 IOPS,   374.52 MiB/s
[2024-11-17T17:34:02.792Z]     95716.00 IOPS,   373.89 MiB/s
[2024-11-17T17:34:03.730Z]     95771.67 IOPS,   374.11 MiB/s
[2024-11-17T17:34:04.666Z]     95811.57 IOPS,   374.26 MiB/s
[2024-11-17T17:34:05.603Z]     95830.25 IOPS,   374.34 MiB/s
[2024-11-17T17:34:06.540Z]     95849.11 IOPS,   374.41 MiB/s
[2024-11-17T17:34:07.919Z]     95861.70 IOPS,   374.46 MiB/s
[2024-11-17T17:34:08.857Z]     95877.82 IOPS,   374.52 MiB/s
[2024-11-17T17:34:09.794Z]     95897.75 IOPS,   374.60 MiB/s
[2024-11-17T17:34:10.731Z]     95910.00 IOPS,   374.65 MiB/s
[2024-11-17T17:34:11.667Z]     95871.86 IOPS,   374.50 MiB/s
[2024-11-17T17:34:12.605Z]     95843.13 IOPS,   374.39 MiB/s
[2024-11-17T17:34:13.542Z]     95851.75 IOPS,   374.42 MiB/s
[2024-11-17T17:34:14.921Z]     95855.41 IOPS,   374.44 MiB/s
[2024-11-17T17:34:15.858Z]     95858.61 IOPS,   374.45 MiB/s
[2024-11-17T17:34:16.796Z]     95863.68 IOPS,   374.47 MiB/s
[2024-11-17T17:34:17.733Z]     95865.65 IOPS,   374.48 MiB/s
[2024-11-17T17:34:18.671Z]     95870.76 IOPS,   374.50 MiB/s
[2024-11-17T17:34:19.607Z]     95874.82 IOPS,   374.51 MiB/s
[2024-11-17T17:34:20.545Z]     95876.78 IOPS,   374.52 MiB/s
[2024-11-17T17:34:21.923Z]     95854.33 IOPS,   374.43 MiB/s
[2024-11-17T17:34:22.860Z]     95832.84 IOPS,   374.35 MiB/s
[2024-11-17T17:34:23.796Z]     95834.19 IOPS,   374.35 MiB/s
[2024-11-17T17:34:24.735Z]     95836.37 IOPS,   374.36 MiB/s
[2024-11-17T17:34:25.671Z]     95836.71 IOPS,   374.36 MiB/s
[2024-11-17T17:34:26.607Z]     95840.79 IOPS,   374.38 MiB/s
[2024-11-17T17:34:26.607Z]     95844.30 IOPS,   374.39 MiB/s
00:13:40.031                                                                                                  Latency(us)
00:13:40.031  
[2024-11-17T17:34:26.607Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:13:40.031  Job: VirtioScsi0t0 (Core Mask 0x10, workload: randrw, percentage: 50, depth: 256, IO size: 4096)
00:13:40.031  	 VirtioScsi0t0       :      30.01   22140.56      86.49       0.00     0.00   11555.20    1839.48   13583.83
00:13:40.031  Job: VirtioScsi0t1 (Core Mask 0x20, workload: randrw, percentage: 50, depth: 256, IO size: 4096)
00:13:40.031  	 VirtioScsi0t1       :      30.01   22139.95      86.48       0.00     0.00   11555.71    1809.69   13524.25
00:13:40.031  Job: VirtioBlk0 (Core Mask 0x40, workload: randrw, percentage: 50, depth: 256, IO size: 4096)
00:13:40.031  	 VirtioBlk0          :      30.01   51555.56     201.39       0.00     0.00    4960.26    1787.35    6881.28
00:13:40.031  
[2024-11-17T17:34:26.607Z]  ===================================================================================================================
00:13:40.031  
[2024-11-17T17:34:26.607Z]  Total                       :              95836.07     374.36       0.00     0.00    8007.72    1787.35   13583.83
00:13:40.031  {
00:13:40.031    "results": [
00:13:40.031      {
00:13:40.031        "job": "VirtioScsi0t0",
00:13:40.031        "core_mask": "0x10",
00:13:40.031        "workload": "randrw",
00:13:40.031        "percentage": 50,
00:13:40.031        "status": "finished",
00:13:40.031        "queue_depth": 256,
00:13:40.031        "io_size": 4096,
00:13:40.031        "runtime": 30.009224,
00:13:40.031        "iops": 22140.559182736615,
00:13:40.031        "mibps": 86.4865593075649,
00:13:40.031        "io_failed": 0,
00:13:40.031        "io_timeout": 0,
00:13:40.031        "avg_latency_us": 11555.202261260694,
00:13:40.031        "min_latency_us": 1839.4763636363637,
00:13:40.031        "max_latency_us": 13583.825454545455
00:13:40.031      },
00:13:40.031      {
00:13:40.031        "job": "VirtioScsi0t1",
00:13:40.031        "core_mask": "0x20",
00:13:40.031        "workload": "randrw",
00:13:40.031        "percentage": 50,
00:13:40.031        "status": "finished",
00:13:40.031        "queue_depth": 256,
00:13:40.031        "io_size": 4096,
00:13:40.031        "runtime": 30.009553,
00:13:40.031        "iops": 22139.94990195289,
00:13:40.031        "mibps": 86.48417930450347,
00:13:40.031        "io_failed": 0,
00:13:40.031        "io_timeout": 0,
00:13:40.031        "avg_latency_us": 11555.712114015032,
00:13:40.031        "min_latency_us": 1809.6872727272728,
00:13:40.031        "max_latency_us": 13524.247272727273
00:13:40.031      },
00:13:40.031      {
00:13:40.031        "job": "VirtioBlk0",
00:13:40.031        "core_mask": "0x40",
00:13:40.031        "workload": "randrw",
00:13:40.031        "percentage": 50,
00:13:40.031        "status": "finished",
00:13:40.031        "queue_depth": 256,
00:13:40.031        "io_size": 4096,
00:13:40.031        "runtime": 30.005937,
00:13:40.031        "iops": 51555.563820586576,
00:13:40.031        "mibps": 201.3889211741663,
00:13:40.031        "io_failed": 0,
00:13:40.031        "io_timeout": 0,
00:13:40.031        "avg_latency_us": 4960.262994155801,
00:13:40.031        "min_latency_us": 1787.3454545454545,
00:13:40.031        "max_latency_us": 6881.28
00:13:40.031      }
00:13:40.031    ],
00:13:40.031    "core_count": 3
00:13:40.031  }
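The "Total" row in the summary table is just the sum of the per-job rates in the JSON blob above. A quick re-check with the per-job `iops` values copied from that blob (trimmed to the fields used):

```python
import json

# Per-job "iops" values copied verbatim from the results JSON above.
results_text = '''{"results": [
  {"job": "VirtioScsi0t0", "iops": 22140.559182736615},
  {"job": "VirtioScsi0t1", "iops": 22139.94990195289},
  {"job": "VirtioBlk0",    "iops": 51555.563820586576}]}'''

total_iops = sum(job["iops"] for job in json.loads(results_text)["results"])
print(round(total_iops, 2))  # 95836.07, matching the Total row
```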
00:13:40.031   18:34:26 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@52 -- # killprocess 454559
00:13:40.031   18:34:26 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 454559 ']'
00:13:40.031   18:34:26 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@958 -- # kill -0 454559
00:13:40.031    18:34:26 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@959 -- # uname
00:13:40.032   18:34:26 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:40.032    18:34:26 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 454559
00:13:40.291   18:34:26 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_4
00:13:40.291   18:34:26 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']'
00:13:40.291   18:34:26 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 454559'
00:13:40.291  killing process with pid 454559
00:13:40.291   18:34:26 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@973 -- # kill 454559
00:13:40.291  Received shutdown signal, test time was about 30.000000 seconds
00:13:40.291                                                                                                  Latency(us)
[2024-11-17T17:34:26.867Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-11-17T17:34:26.867Z]  ===================================================================================================================
[2024-11-17T17:34:26.867Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:13:40.291  [2024-11-17 18:34:26.609307] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status 0
00:13:40.291   18:34:26 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@978 -- # wait 454559
00:13:40.291  [2024-11-17 18:34:26.609510] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x14-0x14, len = 1
00:13:40.291  [2024-11-17 18:34:26.609550] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_STATUS with 0x0
00:13:40.291  [2024-11-17 18:34:26.609564] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status f, set status 0
00:13:40.291  [2024-11-17 18:34:26.609573] vfu_virtio.c:1388:vfu_virtio_dev_stop: *DEBUG*: stop vfu.blk
00:13:40.291  [2024-11-17 18:34:26.609589] vfu_virtio.c: 116:virtio_dev_unmap_vq: *DEBUG*: vfu.blk: unmap vq 0
00:13:40.292  [2024-11-17 18:34:26.609600] vfu_virtio.c: 116:virtio_dev_unmap_vq: *DEBUG*: vfu.blk: unmap vq 1
00:13:40.292  [2024-11-17 18:34:26.609611] vfu_virtio.c: 190:vfu_virtio_dev_reset: *DEBUG*: device vfu.blk resetting
00:13:40.292  [2024-11-17 18:34:26.610495] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:13:40.292  [2024-11-17 18:34:26.610517] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0x0
00:13:40.292  [2024-11-17 18:34:26.610544] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 0
00:13:40.292  [2024-11-17 18:34:26.611497] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x16-0x17, len = 2
00:13:40.292  [2024-11-17 18:34:26.611534] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_SELECT with 0x0
00:13:40.292  [2024-11-17 18:34:26.612498] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x1C-0x1D, len = 2
00:13:40.292  [2024-11-17 18:34:26.612534] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_ENABLE with 0x0
00:13:40.292  [2024-11-17 18:34:26.612544] vfu_virtio.c: 301:virtio_dev_disable_vq: *DEBUG*: vfu.blk: disable vq 0
00:13:40.292  [2024-11-17 18:34:26.612562] vfu_virtio.c: 305:virtio_dev_disable_vq: *NOTICE*: Queue 0 isn't enabled
00:13:40.292  [2024-11-17 18:34:26.613512] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x16-0x17, len = 2
00:13:40.292  [2024-11-17 18:34:26.613547] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_SELECT with 0x1
00:13:40.292  [2024-11-17 18:34:26.614515] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x1C-0x1D, len = 2
00:13:40.292  [2024-11-17 18:34:26.614549] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_ENABLE with 0x0
00:13:40.292  [2024-11-17 18:34:26.614558] vfu_virtio.c: 301:virtio_dev_disable_vq: *DEBUG*: vfu.blk: disable vq 1
00:13:40.292  [2024-11-17 18:34:26.614567] vfu_virtio.c: 305:virtio_dev_disable_vq: *NOTICE*: Queue 1 isn't enabled
00:13:40.292  [2024-11-17 18:34:26.614633] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /tmp/vfu_devices/vfu.blk
00:13:40.292  [2024-11-17 18:34:26.617280] vfio_user_pci.c:  96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x80000000
00:13:40.292  [2024-11-17 18:34:26.648852] vfu_virtio.c:1388:vfu_virtio_dev_stop: *DEBUG*: stop vfu.blk
00:13:40.292  [2024-11-17 18:34:26.648882] vfu_virtio.c:1391:vfu_virtio_dev_stop: *DEBUG*: vfu.blk isn't started
00:13:40.292  [2024-11-17 18:34:26.648896] vfu_virtio.c: 190:vfu_virtio_dev_reset: *DEBUG*: device vfu.blk resetting
00:13:40.292  [2024-11-17 18:34:26.648917] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status 0
00:13:40.292  [2024-11-17 18:34:26.648939] vfu_virtio.c:1416:vfu_virtio_detach_device: *DEBUG*: detach device vfu.blk
00:13:40.292  [2024-11-17 18:34:26.648949] vfu_virtio.c:1388:vfu_virtio_dev_stop: *DEBUG*: stop vfu.blk
00:13:40.292  [2024-11-17 18:34:26.648966] vfu_virtio.c:1391:vfu_virtio_dev_stop: *DEBUG*: vfu.blk isn't started
00:13:40.292  [2024-11-17 18:34:26.649319] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x14-0x14, len = 1
00:13:40.292  [2024-11-17 18:34:26.649376] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_STATUS with 0x0
00:13:40.292  [2024-11-17 18:34:26.649388] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status f, set status 0
00:13:40.292  [2024-11-17 18:34:26.649402] vfu_virtio.c:1388:vfu_virtio_dev_stop: *DEBUG*: stop vfu.scsi
00:13:40.292  [2024-11-17 18:34:26.649419] vfu_virtio.c: 116:virtio_dev_unmap_vq: *DEBUG*: vfu.scsi: unmap vq 0
00:13:40.292  [2024-11-17 18:34:26.649433] vfu_virtio.c: 116:virtio_dev_unmap_vq: *DEBUG*: vfu.scsi: unmap vq 1
00:13:40.292  [2024-11-17 18:34:26.649441] vfu_virtio.c: 116:virtio_dev_unmap_vq: *DEBUG*: vfu.scsi: unmap vq 2
00:13:40.292  [2024-11-17 18:34:26.649450] vfu_virtio.c: 116:virtio_dev_unmap_vq: *DEBUG*: vfu.scsi: unmap vq 3
00:13:40.292  [2024-11-17 18:34:26.649458] vfu_virtio.c: 190:vfu_virtio_dev_reset: *DEBUG*: device vfu.scsi resetting
00:13:40.292  [2024-11-17 18:34:26.650318] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:13:40.292  [2024-11-17 18:34:26.650354] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0x0
00:13:40.292  [2024-11-17 18:34:26.650394] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 0
00:13:40.292  [2024-11-17 18:34:26.651328] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:13:40.292  [2024-11-17 18:34:26.651359] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x0
00:13:40.292  [2024-11-17 18:34:26.652338] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2
00:13:40.292  [2024-11-17 18:34:26.652367] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x0
00:13:40.292  [2024-11-17 18:34:26.652378] vfu_virtio.c: 301:virtio_dev_disable_vq: *DEBUG*: vfu.scsi: disable vq 0
00:13:40.292  [2024-11-17 18:34:26.652392] vfu_virtio.c: 305:virtio_dev_disable_vq: *NOTICE*: Queue 0 isn't enabled
00:13:40.292  [2024-11-17 18:34:26.653348] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:13:40.292  [2024-11-17 18:34:26.653381] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x1
00:13:40.292  [2024-11-17 18:34:26.654352] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2
00:13:40.292  [2024-11-17 18:34:26.654384] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x0
00:13:40.292  [2024-11-17 18:34:26.654394] vfu_virtio.c: 301:virtio_dev_disable_vq: *DEBUG*: vfu.scsi: disable vq 1
00:13:40.292  [2024-11-17 18:34:26.654401] vfu_virtio.c: 305:virtio_dev_disable_vq: *NOTICE*: Queue 1 isn't enabled
00:13:40.292  [2024-11-17 18:34:26.655356] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:13:40.292  [2024-11-17 18:34:26.655387] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x2
00:13:40.292  [2024-11-17 18:34:26.656367] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2
00:13:40.292  [2024-11-17 18:34:26.656398] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x0
00:13:40.292  [2024-11-17 18:34:26.656408] vfu_virtio.c: 301:virtio_dev_disable_vq: *DEBUG*: vfu.scsi: disable vq 2
00:13:40.292  [2024-11-17 18:34:26.656415] vfu_virtio.c: 305:virtio_dev_disable_vq: *NOTICE*: Queue 2 isn't enabled
00:13:40.292  [2024-11-17 18:34:26.657380] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:13:40.292  [2024-11-17 18:34:26.657411] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x3
00:13:40.292  [2024-11-17 18:34:26.658388] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2
00:13:40.292  [2024-11-17 18:34:26.658417] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x0
00:13:40.292  [2024-11-17 18:34:26.658433] vfu_virtio.c: 301:virtio_dev_disable_vq: *DEBUG*: vfu.scsi: disable vq 3
00:13:40.292  [2024-11-17 18:34:26.658440] vfu_virtio.c: 305:virtio_dev_disable_vq: *NOTICE*: Queue 3 isn't enabled
00:13:40.292  [2024-11-17 18:34:26.658503] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /tmp/vfu_devices/vfu.scsi
00:13:40.292  [2024-11-17 18:34:26.661085] vfio_user_pci.c:  96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x80000000
00:13:40.292  [2024-11-17 18:34:26.691615] vfu_virtio.c:1388:vfu_virtio_dev_stop: *DEBUG*: stop vfu.scsi
00:13:40.292  [2024-11-17 18:34:26.691633] vfu_virtio.c:1391:vfu_virtio_dev_stop: *DEBUG*: vfu.scsi isn't started
00:13:40.292  [2024-11-17 18:34:26.691644] vfu_virtio.c: 190:vfu_virtio_dev_reset: *DEBUG*: device vfu.scsi resetting
00:13:40.292  [2024-11-17 18:34:26.691663] vfu_virtio.c:1416:vfu_virtio_detach_device: *DEBUG*: detach device vfu.scsi
00:13:40.292  [2024-11-17 18:34:26.691678] vfu_virtio.c:1388:vfu_virtio_dev_stop: *DEBUG*: stop vfu.scsi
00:13:40.292  [2024-11-17 18:34:26.691685] vfu_virtio.c:1391:vfu_virtio_dev_stop: *DEBUG*: vfu.scsi isn't started
00:13:40.860   18:34:27 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@53 -- # trap - SIGINT SIGTERM EXIT
00:13:40.860   18:34:27 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py vfu_virtio_delete_endpoint vfu.blk
00:13:41.118  [2024-11-17 18:34:27.535569] tgt_endpoint.c: 701:spdk_vfu_delete_endpoint: *NOTICE*: Destruct endpoint vfu.blk
00:13:41.118   18:34:27 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@57 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py vfu_virtio_delete_endpoint vfu.scsi
00:13:41.377  [2024-11-17 18:34:27.772472] tgt_endpoint.c: 701:spdk_vfu_delete_endpoint: *NOTICE*: Destruct endpoint vfu.scsi
00:13:41.377   18:34:27 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@59 -- # killprocess 453936
00:13:41.377   18:34:27 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 453936 ']'
00:13:41.377   18:34:27 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@958 -- # kill -0 453936
00:13:41.377    18:34:27 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@959 -- # uname
00:13:41.377   18:34:27 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:41.377    18:34:27 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 453936
00:13:41.377   18:34:27 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:13:41.377   18:34:27 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:13:41.377   18:34:27 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 453936'
00:13:41.377  killing process with pid 453936
00:13:41.377   18:34:27 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@973 -- # kill 453936
00:13:41.377   18:34:27 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@978 -- # wait 453936
00:13:41.946  
00:13:41.946  real	0m36.436s
00:13:41.946  user	4m30.382s
00:13:41.946  sys	0m2.079s
00:13:41.946   18:34:28 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:41.946   18:34:28 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:13:41.946  ************************************
00:13:41.946  END TEST vfio_user_virtio_bdevperf
00:13:41.946  ************************************
00:13:41.946   18:34:28 vfio_user_qemu -- vfio_user/vfio_user.sh@20 -- # [[ y == y ]]
00:13:41.946   18:34:28 vfio_user_qemu -- vfio_user/vfio_user.sh@21 -- # run_test vfio_user_virtio_fs_fio /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_fs.sh
00:13:41.946   18:34:28 vfio_user_qemu -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:41.946   18:34:28 vfio_user_qemu -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:41.946   18:34:28 vfio_user_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:41.946  ************************************
00:13:41.946  START TEST vfio_user_virtio_fs_fio
00:13:41.946  ************************************
00:13:41.946   18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_fs.sh
00:13:41.946  * Looking for test storage...
00:13:41.946  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:13:41.946    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:13:41.946     18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1693 -- # lcov --version
00:13:41.946     18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:13:41.946    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:13:41.946    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:41.946    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:41.946    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:41.946    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@336 -- # IFS=.-:
00:13:41.946    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@336 -- # read -ra ver1
00:13:41.946    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@337 -- # IFS=.-:
00:13:41.946    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@337 -- # read -ra ver2
00:13:41.946    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@338 -- # local 'op=<'
00:13:41.946    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@340 -- # ver1_l=2
00:13:41.946    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@341 -- # ver2_l=1
00:13:41.946    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:13:41.946    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@344 -- # case "$op" in
00:13:41.946    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@345 -- # : 1
00:13:41.946    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@364 -- # (( v = 0 ))
00:13:41.946    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:41.946     18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@365 -- # decimal 1
00:13:41.946     18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@353 -- # local d=1
00:13:41.946     18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:41.946     18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@355 -- # echo 1
00:13:41.946    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@365 -- # ver1[v]=1
00:13:41.946     18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@366 -- # decimal 2
00:13:41.946     18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@353 -- # local d=2
00:13:41.946     18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:41.946     18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@355 -- # echo 2
00:13:41.946    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@366 -- # ver2[v]=2
00:13:41.946    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:13:41.946    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:13:41.946    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@368 -- # return 0
00:13:41.946    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:41.946    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:13:41.946  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:41.946  		--rc genhtml_branch_coverage=1
00:13:41.946  		--rc genhtml_function_coverage=1
00:13:41.946  		--rc genhtml_legend=1
00:13:41.946  		--rc geninfo_all_blocks=1
00:13:41.946  		--rc geninfo_unexecuted_blocks=1
00:13:41.946  		
00:13:41.946  		'
00:13:41.946    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:13:41.946  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:41.946  		--rc genhtml_branch_coverage=1
00:13:41.946  		--rc genhtml_function_coverage=1
00:13:41.946  		--rc genhtml_legend=1
00:13:41.946  		--rc geninfo_all_blocks=1
00:13:41.946  		--rc geninfo_unexecuted_blocks=1
00:13:41.946  		
00:13:41.946  		'
00:13:41.946    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:13:41.946  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:41.946  		--rc genhtml_branch_coverage=1
00:13:41.946  		--rc genhtml_function_coverage=1
00:13:41.946  		--rc genhtml_legend=1
00:13:41.946  		--rc geninfo_all_blocks=1
00:13:41.946  		--rc geninfo_unexecuted_blocks=1
00:13:41.946  		
00:13:41.946  		'
00:13:41.946    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:13:41.946  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:41.946  		--rc genhtml_branch_coverage=1
00:13:41.946  		--rc genhtml_function_coverage=1
00:13:41.946  		--rc genhtml_legend=1
00:13:41.946  		--rc geninfo_all_blocks=1
00:13:41.946  		--rc geninfo_unexecuted_blocks=1
00:13:41.946  		
00:13:41.946  		'
00:13:41.946   18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/common.sh
00:13:41.946    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/common.sh@6 -- # : 128
00:13:41.946    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/common.sh@7 -- # : 512
00:13:41.946    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/common.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh
00:13:41.946     18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@6 -- # : false
00:13:41.946     18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@7 -- # : /root/vhost_test
00:13:41.946     18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@8 -- # : /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:13:41.946     18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@9 -- # : qemu-img
00:13:42.207      18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/..
00:13:42.207     18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest
00:13:42.207     18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:13:42.207     18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:13:42.207     18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:13:42.207     18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:13:42.207     18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:13:42.207       18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_fs.sh
00:13:42.207      18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:13:42.207     18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:13:42.207     18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:13:42.207     18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:13:42.207     18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:13:42.207     18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:13:42.207     18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config
00:13:42.207      18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:13:42.207      18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@2 -- # vhost_0_main_core=0
00:13:42.207      18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:13:42.207      18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:13:42.207      18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:13:42.207      18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:13:42.207      18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:13:42.207      18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:13:42.207      18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:13:42.207      18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:13:42.207      18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:13:42.207      18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:13:42.207      18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:13:42.207      18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:13:42.207      18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:13:42.207      18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:13:42.207      18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:13:42.207      18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:13:42.207      18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:13:42.207      18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:13:42.207      18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:13:42.207      18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:13:42.207      18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:13:42.207      18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:13:42.207      18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:13:42.207      18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:13:42.207     18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh
00:13:42.207      18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:13:42.207      18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:13:42.207      18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:13:42.207      18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler
00:13:42.207      18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:13:42.207      18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh
00:13:42.207       18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:13:42.207        18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/cgroups.sh@244 -- # check_cgroup
00:13:42.207        18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:13:42.207        18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:13:42.207        18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/cgroups.sh@10 -- # echo 2
00:13:42.207       18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/cgroups.sh@244 -- # cgroup_version=2
00:13:42.207    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/common.sh@11 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:13:42.207    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/common.sh@14 -- # [[ ! -e /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 ]]
00:13:42.207    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/common.sh@19 -- # QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:13:42.208   18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/common.sh
00:13:42.208   18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@12 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/autotest.config
00:13:42.208    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@1 -- # vhost_0_reactor_mask='[0-3]'
00:13:42.208    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@2 -- # vhost_0_main_core=0
00:13:42.208    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@4 -- # VM_0_qemu_mask=4-5
00:13:42.208    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:13:42.208    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@7 -- # VM_1_qemu_mask=6-7
00:13:42.208    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:13:42.208    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@10 -- # VM_2_qemu_mask=8-9
00:13:42.208    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:13:42.208    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@14 -- # get_vhost_dir 0
00:13:42.208    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@105 -- # local vhost_name=0
00:13:42.208    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:13:42.208    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:13:42.208   18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@14 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:13:42.208   18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@16 -- # vhosttestinit
00:13:42.208   18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@37 -- # '[' '' == iso ']'
00:13:42.208   18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@41 -- # [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz ]]
00:13:42.208   18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@41 -- # [[ ! -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:13:42.208   18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@46 -- # [[ ! -f /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:13:42.208   18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@18 -- # trap 'error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:13:42.208   18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@20 -- # vfu_tgt_run 0
00:13:42.208   18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@6 -- # local vhost_name=0
00:13:42.208   18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@7 -- # local vfio_user_dir vfu_pid_file rpc_py
00:13:42.208    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@9 -- # get_vhost_dir 0
00:13:42.208    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@105 -- # local vhost_name=0
00:13:42.208    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:13:42.208    18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:13:42.208   18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@9 -- # vfio_user_dir=/root/vhost_test/vhost/0
00:13:42.208   18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@10 -- # vfu_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:13:42.208   18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@11 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:13:42.208   18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@13 -- # mkdir -p /root/vhost_test/vhost/0
00:13:42.208   18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@15 -- # timing_enter vfu_tgt_start
00:13:42.208   18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@726 -- # xtrace_disable
00:13:42.208   18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x
00:13:42.208   18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@17 -- # vfupid=460379
00:13:42.208   18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@16 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -r /root/vhost_test/vhost/0/rpc.sock -m 0xf -s 512
00:13:42.208   18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@18 -- # echo 460379
00:13:42.208   18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@20 -- # echo 'Process pid: 460379'
00:13:42.208  Process pid: 460379
00:13:42.208   18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@21 -- # echo 'waiting for app to run...'
00:13:42.208  waiting for app to run...
00:13:42.208   18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@22 -- # waitforlisten 460379 /root/vhost_test/vhost/0/rpc.sock
00:13:42.208   18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@835 -- # '[' -z 460379 ']'
00:13:42.208   18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@839 -- # local rpc_addr=/root/vhost_test/vhost/0/rpc.sock
00:13:42.208   18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:42.208   18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...'
00:13:42.208  Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...
00:13:42.208   18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:42.208   18:34:28 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x
00:13:42.208  [2024-11-17 18:34:28.666537] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:13:42.208  [2024-11-17 18:34:28.666663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xf -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid460379 ]
00:13:42.208  EAL: No free 2048 kB hugepages reported on node 1
00:13:42.467  [2024-11-17 18:34:28.903027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:13:42.467  [2024-11-17 18:34:28.932986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:13:42.467  [2024-11-17 18:34:28.933030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:42.467  [2024-11-17 18:34:28.933024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:13:42.467  [2024-11-17 18:34:28.933093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:13:43.036   18:34:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:43.036   18:34:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@868 -- # return 0
00:13:43.036   18:34:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@24 -- # timing_exit vfu_tgt_start
00:13:43.036   18:34:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@732 -- # xtrace_disable
00:13:43.036   18:34:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x
00:13:43.036   18:34:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@22 -- # vfu_vm_dir=/root/vhost_test/vms/vfu_tgt
00:13:43.036   18:34:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@23 -- # rm -rf /root/vhost_test/vms/vfu_tgt
00:13:43.036   18:34:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@24 -- # mkdir -p /root/vhost_test/vms/vfu_tgt
00:13:43.036   18:34:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@27 -- # disk_no=1
00:13:43.036   18:34:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@28 -- # vm_num=1
00:13:43.036   18:34:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@29 -- # job_file=default_fsdev.job
00:13:43.036   18:34:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@30 -- # be_virtiofs_dir=/tmp/vfio-test.1
00:13:43.036   18:34:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@31 -- # vm_virtiofs_dir=/tmp/virtiofs.1
00:13:43.036   18:34:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@33 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vfu_tgt_set_base_path /root/vhost_test/vms/vfu_tgt
00:13:43.295   18:34:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@35 -- # rm -rf /tmp/vfio-test.1
00:13:43.295   18:34:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@36 -- # mkdir -p /tmp/vfio-test.1
00:13:43.295    18:34:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@39 -- # mktemp --tmpdir=/tmp/vfio-test.1
00:13:43.295   18:34:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@39 -- # tmpfile=/tmp/vfio-test.1/tmp.4su7xv1ole
00:13:43.295   18:34:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@41 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock fsdev_aio_create aio.1 /tmp/vfio-test.1
00:13:43.554  aio.1
00:13:43.554   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@42 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vfu_virtio_create_fs_endpoint virtio.1 --fsdev-name aio.1 --tag vfu_test.1 --num-queues=2 --qsize=512 --packed-ring
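The two RPC calls above wire a host directory into the guest: `fsdev_aio_create` exposes `/tmp/vfio-test.1` as the aio fsdev `aio.1`, and `vfu_virtio_create_fs_endpoint` publishes it over vfio-user as `virtio.1` with the mount tag `vfu_test.1`. As a sketch only (not an executable reproduction — `rpc.py` and the vhost socket exist only on the CI host), the command strings assembled from the log are:

```shell
# Sketch: reconstruct the two rpc.py invocations seen in the log above.
# rpc.py and the rpc.sock socket exist only inside the test bed, so this
# script merely builds and prints the command strings without running them.
rpc_py="/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock"

# Back the fsdev "aio.1" with the host directory /tmp/vfio-test.1
create_fsdev="$rpc_py fsdev_aio_create aio.1 /tmp/vfio-test.1"

# Expose it as vfio-user endpoint "virtio.1", mountable in the guest by tag "vfu_test.1"
create_ep="$rpc_py vfu_virtio_create_fs_endpoint virtio.1 --fsdev-name aio.1 --tag vfu_test.1 --num-queues=2 --qsize=512 --packed-ring"

echo "$create_fsdev"
echo "$create_ep"
```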
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@45 -- # vm_setup --disk-type=vfio_user_virtio --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=1
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@518 -- # xtrace_disable
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x
00:13:43.814  WARN: removing existing VM in '/root/vhost_test/vms/1'
00:13:43.814  INFO: Creating new VM in /root/vhost_test/vms/1
00:13:43.814  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:13:43.814  INFO: TASK MASK: 6-7
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@671 -- # local node_num=0
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@672 -- # local boot_disk_present=false
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:13:43.814  INFO: NUMA NODE: 0
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@677 -- # [[ -n '' ]]
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@686 -- # [[ -z '' ]]
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@701 -- # IFS=,
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@701 -- # read -r disk disk_type _
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@702 -- # [[ -z '' ]]
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@702 -- # disk_type=vfio_user_virtio
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@704 -- # case $disk_type in
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@766 -- # notice 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:13:43.814   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:13:43.815  INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@767 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/vfu_tgt/virtio.$disk")
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@768 -- # [[ 1 == '' ]]
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@780 -- # [[ -n '' ]]
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@785 -- # (( 0 ))
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh'
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh'
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh'
00:13:43.815  INFO: Saving to /root/vhost_test/vms/1/run.sh
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@787 -- # cat
00:13:43.815    18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/vfu_tgt/virtio.1
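The generated run.sh above attaches the SPDK endpoint to QEMU through a `vfio-user-pci` device whose socket path is built from `$VM_DIR` and the disk number. A minimal sketch of just that argument, with values taken from this run of vhost/common.sh:

```shell
# Sketch: rebuild the vfio-user-pci device argument emitted into run.sh above.
# VM_DIR and disk hold the values used by vhost/common.sh in this run.
VM_DIR=/root/vhost_test/vms
disk=1
dev="vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/vfu_tgt/virtio.$disk"
echo "$dev"
```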
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/1/run.sh
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@827 -- # echo 10100
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@828 -- # echo 10101
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@829 -- # echo 10102
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/1/migration_port
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@832 -- # [[ -z '' ]]
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@834 -- # echo 10104
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@835 -- # echo 101
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@837 -- # [[ -z '' ]]
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@838 -- # [[ -z '' ]]
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@46 -- # vm_run 1
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@843 -- # local run_all=false
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@844 -- # local vms_to_run=
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@846 -- # getopts a-: optchar
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@856 -- # false
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@859 -- # shift 0
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@860 -- # for vm in "$@"
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@861 -- # vm_num_is_valid 1
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]]
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@866 -- # vms_to_run+=' 1'
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@871 -- # vm_is_running 1
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@373 -- # return 1
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/1/run.sh'
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh'
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh'
00:13:43.815  INFO: running /root/vhost_test/vms/1/run.sh
00:13:43.815   18:34:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@877 -- # /root/vhost_test/vms/1/run.sh
00:13:43.815  Running VM in /root/vhost_test/vms/1
00:13:44.074  [2024-11-17 18:34:30.600577] tgt_endpoint.c: 167:tgt_accept_poller: *NOTICE*: /root/vhost_test/vms/vfu_tgt/virtio.1: attached successfully
00:13:44.333  Waiting for QEMU pid file
00:13:45.270  === qemu.log ===
00:13:45.270  === qemu.log ===
00:13:45.270   18:34:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@47 -- # vm_wait_for_boot 60 1
00:13:45.270   18:34:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@913 -- # assert_number 60
00:13:45.270   18:34:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@281 -- # [[ 60 =~ [0-9]+ ]]
00:13:45.270   18:34:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@281 -- # return 0
00:13:45.270   18:34:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@915 -- # xtrace_disable
00:13:45.270   18:34:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x
00:13:45.270  INFO: Waiting for VMs to boot
00:13:45.270  INFO: waiting for VM1 (/root/vhost_test/vms/1)
00:14:07.211  
00:14:07.211  INFO: VM1 ready
00:14:07.211  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:14:07.211  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:14:07.779  INFO: all VMs ready
00:14:07.779   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@973 -- # return 0
00:14:07.779   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@49 -- # vm_exec 1 'mkdir /tmp/virtiofs.1'
00:14:07.779   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:14:07.779   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:07.779   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:07.779   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:14:07.779   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:14:07.779    18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:14:07.779    18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:14:07.779    18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:07.779    18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:07.779    18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:14:07.779    18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:14:07.779   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'mkdir /tmp/virtiofs.1'
00:14:07.779  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:14:07.779   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@50 -- # vm_exec 1 'mount -t virtiofs vfu_test.1 /tmp/virtiofs.1'
00:14:07.779   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:14:07.779   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:07.779   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:07.779   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:14:07.779   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:14:07.779    18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:14:07.779    18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:14:07.779    18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:07.779    18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:07.779    18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:14:07.779    18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:14:07.779   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'mount -t virtiofs vfu_test.1 /tmp/virtiofs.1'
00:14:08.038  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
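Inside VM1 the share is mounted by its tag rather than a device path: `mount -t virtiofs` resolves the `vfu_test.1` tag that was given at endpoint creation. A sketch of the guest-side commands from the log, assembled as strings only since they need a guest with the virtiofs device present:

```shell
# Sketch: guest-side mount of the virtio-fs share, keyed by tag.
tag=vfu_test.1        # --tag passed to vfu_virtio_create_fs_endpoint
mnt=/tmp/virtiofs.1   # mount point created in the guest
mount_cmd="mount -t virtiofs $tag $mnt"
echo "mkdir -p $mnt"
echo "$mount_cmd"
```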
00:14:08.038    18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@52 -- # basename /tmp/vfio-test.1/tmp.4su7xv1ole
00:14:08.038   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@52 -- # vm_exec 1 'ls /tmp/virtiofs.1/tmp.4su7xv1ole'
00:14:08.038   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:14:08.038   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:08.038   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:08.038   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:14:08.038   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:14:08.038    18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:14:08.039    18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:14:08.039    18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:08.039    18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:08.039    18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:14:08.039    18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:14:08.039   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'ls /tmp/virtiofs.1/tmp.4su7xv1ole'
00:14:08.039  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:14:08.298  /tmp/virtiofs.1/tmp.4su7xv1ole
00:14:08.298   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@53 -- # vm_start_fio_server --fio-bin=/usr/src/fio-static/fio 1
00:14:08.298   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@977 -- # local OPTIND optchar
00:14:08.298   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@978 -- # local readonly=
00:14:08.298   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@979 -- # local fio_bin=
00:14:08.298   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@980 -- # getopts :-: optchar
00:14:08.298   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@981 -- # case "$optchar" in
00:14:08.298   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@983 -- # case "$OPTARG" in
00:14:08.298   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@984 -- # local fio_bin=/usr/src/fio-static/fio
00:14:08.298   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@980 -- # getopts :-: optchar
00:14:08.298   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@993 -- # shift 1
00:14:08.298   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@994 -- # for vm_num in "$@"
00:14:08.298   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@995 -- # notice 'Starting fio server on VM1'
00:14:08.298   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'Starting fio server on VM1'
00:14:08.298   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:14:08.298   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:14:08.298   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:14:08.298   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:08.298   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:14:08.298   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Starting fio server on VM1'
00:14:08.298  INFO: Starting fio server on VM1
00:14:08.298   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@996 -- # [[ /usr/src/fio-static/fio != '' ]]
00:14:08.298   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@997 -- # vm_exec 1 'cat > /root/fio; chmod +x /root/fio'
00:14:08.298   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:14:08.298   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:08.298   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:08.298   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:14:08.298   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:14:08.298    18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:14:08.298    18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:14:08.298    18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:08.298    18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:08.298    18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:14:08.298    18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:14:08.298   18:34:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/fio; chmod +x /root/fio'
00:14:08.298  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:14:08.564   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@998 -- # vm_exec 1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:14:08.564   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:14:08.564   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:08.564   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:08.564   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:14:08.564   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:14:08.564    18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:14:08.564    18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:14:08.564    18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:08.564    18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:08.564    18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:14:08.564    18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:14:08.564   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:14:08.564  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:14:08.823   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@54 -- # run_fio --fio-bin=/usr/src/fio-static/fio --job-file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_fsdev.job --out=/root/vhost_test/fio_results --vm=1:/tmp/virtiofs.1/test
00:14:08.823   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1053 -- # local arg
00:14:08.823   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1054 -- # local job_file=
00:14:08.823   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1055 -- # local fio_bin=
00:14:08.823   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1056 -- # vms=()
00:14:08.823   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1056 -- # local vms
00:14:08.823   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1057 -- # local out=
00:14:08.823   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1058 -- # local vm
00:14:08.823   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1059 -- # local run_server_mode=true
00:14:08.823   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1060 -- # local run_plugin_mode=false
00:14:08.823   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1061 -- # local fio_start_cmd
00:14:08.823   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1062 -- # local fio_output_format=normal
00:14:08.823   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1063 -- # local fio_gtod_reduce=false
00:14:08.823   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1064 -- # local wait_for_fio=true
00:14:08.823   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:14:08.823   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:14:08.823   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1069 -- # local fio_bin=/usr/src/fio-static/fio
00:14:08.823   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:14:08.823   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:14:08.823   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1068 -- # local job_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_fsdev.job
00:14:08.823   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:14:08.823   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:14:08.824   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1072 -- # local out=/root/vhost_test/fio_results
00:14:08.824   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1073 -- # mkdir -p /root/vhost_test/fio_results
00:14:08.824   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:14:08.824   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:14:08.824   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1070 -- # vms+=("${arg#*=}")
00:14:08.824   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1092 -- # [[ -n /usr/src/fio-static/fio ]]
00:14:08.824   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1092 -- # [[ ! -r /usr/src/fio-static/fio ]]
00:14:08.824   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1097 -- # [[ -z /usr/src/fio-static/fio ]]
00:14:08.824   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1101 -- # [[ ! -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_fsdev.job ]]
00:14:08.824   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1106 -- # fio_start_cmd='/usr/src/fio-static/fio --eta=never '
00:14:08.824   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1108 -- # local job_fname
00:14:08.824    18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1109 -- # basename /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_fsdev.job
00:14:08.824   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1109 -- # job_fname=default_fsdev.job
00:14:08.824   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1110 -- # log_fname=default_fsdev.log
00:14:08.824   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1111 -- # fio_start_cmd+=' --output=/root/vhost_test/fio_results/default_fsdev.log --output-format=normal '
00:14:08.824   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1114 -- # for vm in "${vms[@]}"
00:14:08.824   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1115 -- # local vm_num=1
00:14:08.824   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1116 -- # local vmdisks=/tmp/virtiofs.1/test
00:14:08.824   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1118 -- # sed 's@filename=@filename=/tmp/virtiofs.1/test@;s@description=\(.*\)@description=\1 (VM=1)@' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_fsdev.job
00:14:08.824   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1119 -- # vm_exec 1 'cat > /root/default_fsdev.job'
00:14:08.824   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:14:08.824   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:08.824   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:08.824   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:14:08.824   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:14:08.824    18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:14:08.824    18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:14:08.824    18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:08.824    18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:08.824    18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:14:08.824    18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:14:08.824   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/default_fsdev.job'
00:14:08.824  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:14:09.082   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1121 -- # false
00:14:09.083   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1125 -- # vm_exec 1 cat /root/default_fsdev.job
00:14:09.083   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:14:09.083   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:09.083   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:09.083   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:14:09.083   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:14:09.083    18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:14:09.083    18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:14:09.083    18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:09.083    18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:09.083    18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:14:09.083    18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:14:09.083   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 cat /root/default_fsdev.job
00:14:09.083  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:14:09.341  [global]
00:14:09.341  blocksize=4k
00:14:09.341  iodepth=512
00:14:09.341  ioengine=libaio
00:14:09.341  size=1G
00:14:09.341  group_reporting
00:14:09.341  thread
00:14:09.341  numjobs=1
00:14:09.341  direct=1
00:14:09.341  invalidate=1
00:14:09.341  rw=randrw
00:14:09.341  do_verify=1
00:14:09.341  filename=/tmp/virtiofs.1/test
00:14:09.341  [job0]
00:14:09.341   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1127 -- # true
00:14:09.341    18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1128 -- # vm_fio_socket 1
00:14:09.341    18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@326 -- # vm_num_is_valid 1
00:14:09.341    18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:09.341    18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:09.341    18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@327 -- # local vm_dir=/root/vhost_test/vms/1
00:14:09.341    18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@329 -- # cat /root/vhost_test/vms/1/fio_socket
00:14:09.341   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1128 -- # fio_start_cmd+='--client=127.0.0.1,10101 --remote-config /root/default_fsdev.job '
00:14:09.341   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1131 -- # true
00:14:09.341   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1147 -- # true
00:14:09.341   18:34:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1161 -- # /usr/src/fio-static/fio --eta=never --output=/root/vhost_test/fio_results/default_fsdev.log --output-format=normal --client=127.0.0.1,10101 --remote-config /root/default_fsdev.job
00:14:31.280   18:35:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1162 -- # sleep 1
00:14:31.280   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1164 -- # [[ normal == \j\s\o\n ]]
00:14:31.280   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1172 -- # [[ ! -n '' ]]
00:14:31.280   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1173 -- # cat /root/vhost_test/fio_results/default_fsdev.log
00:14:31.280  hostname=vhostfedora-cloud-23052, be=0, 64-bit, os=Linux, arch=x86-64, fio=fio-3.35, flags=1
00:14:31.280  <vhostfedora-cloud-23052> job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=512
00:14:31.280  <vhostfedora-cloud-23052> Starting 1 thread
00:14:31.280  <vhostfedora-cloud-23052> job0: Laying out IO file (1 file / 1024MiB)
00:14:31.280  <vhostfedora-cloud-23052> 
00:14:31.280  job0: (groupid=0, jobs=1): err= 0: pid=969: Sun Nov 17 18:35:14 2024
00:14:31.280    read: IOPS=34.0k, BW=133MiB/s (139MB/s)(512MiB/3859msec)
00:14:31.280      slat (nsec): min=1435, max=105161, avg=4990.46, stdev=3080.00
00:14:31.280      clat (usec): min=1596, max=14602, avg=7583.53, stdev=318.24
00:14:31.280       lat (usec): min=1598, max=14607, avg=7588.52, stdev=318.32
00:14:31.280      clat percentiles (usec):
00:14:31.280       |  1.00th=[ 7242],  5.00th=[ 7308], 10.00th=[ 7373], 20.00th=[ 7439],
00:14:31.280       | 30.00th=[ 7439], 40.00th=[ 7504], 50.00th=[ 7570], 60.00th=[ 7570],
00:14:31.280       | 70.00th=[ 7635], 80.00th=[ 7701], 90.00th=[ 7832], 95.00th=[ 7963],
00:14:31.280       | 99.00th=[ 8225], 99.50th=[ 8356], 99.90th=[10683], 99.95th=[13042],
00:14:31.280       | 99.99th=[14484]
00:14:31.280     bw (  KiB/s): min=131704, max=138272, per=99.99%, avg=135817.14, stdev=2755.93, samples=7
00:14:31.280     iops        : min=32926, max=34568, avg=33954.29, stdev=688.98, samples=7
00:14:31.280    write: IOPS=34.0k, BW=133MiB/s (139MB/s)(512MiB/3859msec); 0 zone resets
00:14:31.280      slat (nsec): min=1617, max=139869, avg=5644.96, stdev=3241.73
00:14:31.280      clat (usec): min=1530, max=14609, avg=7470.87, stdev=316.32
00:14:31.280       lat (usec): min=1534, max=14615, avg=7476.52, stdev=316.43
00:14:31.280      clat percentiles (usec):
00:14:31.280       |  1.00th=[ 7111],  5.00th=[ 7242], 10.00th=[ 7242], 20.00th=[ 7308],
00:14:31.280       | 30.00th=[ 7373], 40.00th=[ 7373], 50.00th=[ 7439], 60.00th=[ 7504],
00:14:31.280       | 70.00th=[ 7570], 80.00th=[ 7635], 90.00th=[ 7767], 95.00th=[ 7832],
00:14:31.280       | 99.00th=[ 8094], 99.50th=[ 8225], 99.90th=[10552], 99.95th=[12649],
00:14:31.280       | 99.99th=[14353]
00:14:31.280     bw (  KiB/s): min=132176, max=137624, per=99.82%, avg=135645.71, stdev=2488.22, samples=7
00:14:31.280     iops        : min=33044, max=34406, avg=33911.43, stdev=621.62, samples=7
00:14:31.280    lat (msec)   : 2=0.05%, 4=0.05%, 10=99.78%, 20=0.12%
00:14:31.280    cpu          : usr=15.58%, sys=36.88%, ctx=8202, majf=0, minf=7
00:14:31.280    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
00:14:31.280       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:31.281       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:14:31.281       issued rwts: total=131040,131104,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:31.281       latency   : target=0, window=0, percentile=100.00%, depth=512
00:14:31.281  
00:14:31.281  Run status group 0 (all jobs):
00:14:31.281     READ: bw=133MiB/s (139MB/s), 133MiB/s-133MiB/s (139MB/s-139MB/s), io=512MiB (537MB), run=3859-3859msec
00:14:31.281    WRITE: bw=133MiB/s (139MB/s), 133MiB/s-133MiB/s (139MB/s-139MB/s), io=512MiB (537MB), run=3859-3859msec
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@55 -- # vm_exec 1 'umount /tmp/virtiofs.1'
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:14:31.281    18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:14:31.281    18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:14:31.281    18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:31.281    18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:31.281    18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:14:31.281    18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'umount /tmp/virtiofs.1'
00:14:31.281  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@58 -- # notice 'Shutting down virtual machine...'
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine...'
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine...'
00:14:31.281  INFO: Shutting down virtual machine...
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@59 -- # vm_shutdown_all
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@487 -- # local timeo=90 vms vm
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@489 -- # vms=($(vm_list_all))
00:14:31.281    18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@489 -- # vm_list_all
00:14:31.281    18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@466 -- # vms=()
00:14:31.281    18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@466 -- # local vms
00:14:31.281    18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:14:31.281    18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:14:31.281    18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@492 -- # vm_shutdown 1
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@417 -- # vm_num_is_valid 1
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/1
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/1 ]]
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@424 -- # vm_is_running 1
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@376 -- # local vm_pid
00:14:31.281    18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@377 -- # vm_pid=460842
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@379 -- # /bin/kill -0 460842
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@380 -- # return 0
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1'
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1'
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1'
00:14:31.281  INFO: Shutting down virtual machine /root/vhost_test/vms/1
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@432 -- # set +e
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@433 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\'''
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:14:31.281    18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:14:31.281    18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:14:31.281    18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:31.281    18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:31.281    18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:14:31.281    18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:14:31.281   18:35:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:14:31.281  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:14:31.281   18:35:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@434 -- # notice 'VM1 is shutting down - wait a while to complete'
00:14:31.281   18:35:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete'
00:14:31.281   18:35:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:14:31.281   18:35:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:14:31.281   18:35:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:14:31.281   18:35:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:31.281   18:35:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:14:31.281   18:35:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete'
00:14:31.281  INFO: VM1 is shutting down - wait a while to complete
00:14:31.281   18:35:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@435 -- # set -e
00:14:31.281   18:35:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@495 -- # notice 'Waiting for VMs to shutdown...'
00:14:31.281   18:35:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...'
00:14:31.281   18:35:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:14:31.281   18:35:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:14:31.281   18:35:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:14:31.281   18:35:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:31.281   18:35:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:14:31.281   18:35:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...'
00:14:31.281  INFO: Waiting for VMs to shutdown...
00:14:31.281   18:35:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:14:31.281   18:35:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:14:31.281   18:35:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@498 -- # vm_is_running 1
00:14:31.281   18:35:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:14:31.281   18:35:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:31.281   18:35:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:31.281   18:35:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:14:31.281   18:35:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:14:31.281   18:35:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@376 -- # local vm_pid
00:14:31.281    18:35:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:14:31.281   18:35:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@377 -- # vm_pid=460842
00:14:31.281   18:35:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@379 -- # /bin/kill -0 460842
00:14:31.281   18:35:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@380 -- # return 0
00:14:31.281   18:35:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@500 -- # sleep 1
00:14:31.282   18:35:17 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:14:31.282   18:35:17 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:14:31.282   18:35:17 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@498 -- # vm_is_running 1
00:14:31.282   18:35:17 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:14:31.282   18:35:17 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:31.282   18:35:17 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:31.282   18:35:17 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:14:31.282   18:35:17 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:14:31.282   18:35:17 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@376 -- # local vm_pid
00:14:31.282    18:35:17 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:14:31.282   18:35:17 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@377 -- # vm_pid=460842
00:14:31.282   18:35:17 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@379 -- # /bin/kill -0 460842
00:14:31.282   18:35:17 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@380 -- # return 0
00:14:31.282   18:35:17 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@500 -- # sleep 1
00:14:31.850   18:35:18 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:14:31.850   18:35:18 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:14:31.850   18:35:18 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@498 -- # vm_is_running 1
00:14:31.850   18:35:18 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:14:31.850   18:35:18 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:31.850   18:35:18 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:31.850   18:35:18 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:14:31.850   18:35:18 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:14:31.850   18:35:18 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@373 -- # return 1
00:14:31.850   18:35:18 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:14:31.850   18:35:18 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@500 -- # sleep 1
00:14:32.793   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@496 -- # (( timeo-- > 0 && 0 > 0 ))
00:14:32.793   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@503 -- # (( 0 == 0 ))
00:14:32.793   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@504 -- # notice 'All VMs successfully shut down'
00:14:32.793   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down'
00:14:32.793   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:14:32.793   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:14:32.793   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:14:32.793   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:32.793   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:14:32.793   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down'
00:14:32.794  INFO: All VMs successfully shut down
00:14:32.794   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@505 -- # return 0
00:14:32.794   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@61 -- # vhost_kill 0
00:14:32.794   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@202 -- # local rc=0
00:14:32.794   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@203 -- # local vhost_name=0
00:14:32.794   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@205 -- # [[ -z 0 ]]
00:14:32.794   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@210 -- # local vhost_dir
00:14:32.794    18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@211 -- # get_vhost_dir 0
00:14:32.794    18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@105 -- # local vhost_name=0
00:14:32.794    18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:14:32.794    18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:14:32.794   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@211 -- # vhost_dir=/root/vhost_test/vhost/0
00:14:32.794   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@212 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:14:32.794   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@214 -- # [[ ! -r /root/vhost_test/vhost/0/vhost.pid ]]
00:14:32.794   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@219 -- # timing_enter vhost_kill
00:14:32.794   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@726 -- # xtrace_disable
00:14:32.794   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x
00:14:32.794   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@220 -- # local vhost_pid
00:14:32.794    18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@221 -- # cat /root/vhost_test/vhost/0/vhost.pid
00:14:32.794   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@221 -- # vhost_pid=460379
00:14:32.794   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@222 -- # notice 'killing vhost (PID 460379) app'
00:14:32.794   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'killing vhost (PID 460379) app'
00:14:32.794   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:14:32.794   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:14:32.794   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:14:32.794   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:32.794   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:14:32.794   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: killing vhost (PID 460379) app'
00:14:32.795  INFO: killing vhost (PID 460379) app
00:14:32.795   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@224 -- # kill -INT 460379
00:14:32.795   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@225 -- # notice 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:14:32.795   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:14:32.795   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:14:32.795   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:14:32.795   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:14:32.795   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:32.795   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:14:32.795   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: sent SIGINT to vhost app - waiting 60 seconds to exit'
00:14:32.795  INFO: sent SIGINT to vhost app - waiting 60 seconds to exit
00:14:32.795   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i = 0 ))
00:14:32.795   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i < 60 ))
00:14:32.795   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@227 -- # kill -0 460379
00:14:32.795   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@228 -- # echo .
00:14:32.795  .
00:14:32.795   18:35:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@229 -- # sleep 1
00:14:33.056  [2024-11-17 18:35:19.430654] vfu_virtio_fs.c: 301:_vfu_virtio_fs_fuse_dispatcher_delete_cpl: *NOTICE*: FUSE dispatcher deleted
00:14:33.993   18:35:20 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i++ ))
00:14:33.993   18:35:20 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i < 60 ))
00:14:33.993   18:35:20 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@227 -- # kill -0 460379
00:14:33.993  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 227: kill: (460379) - No such process
00:14:33.993   18:35:20 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@231 -- # break
00:14:33.993   18:35:20 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@234 -- # kill -0 460379
00:14:33.993  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 234: kill: (460379) - No such process
00:14:33.993   18:35:20 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@239 -- # kill -0 460379
00:14:33.993  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 239: kill: (460379) - No such process
00:14:33.993   18:35:20 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@245 -- # is_pid_child 460379
00:14:33.993   18:35:20 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1668 -- # local pid=460379 _pid
00:14:33.993   18:35:20 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1670 -- # read -r _pid
00:14:33.993    18:35:20 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1667 -- # jobs -pr
00:14:33.993   18:35:20 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1671 -- # (( pid == _pid ))
00:14:33.993   18:35:20 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1670 -- # read -r _pid
00:14:33.993   18:35:20 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1674 -- # return 1
00:14:33.993   18:35:20 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@257 -- # timing_exit vhost_kill
00:14:33.993   18:35:20 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@732 -- # xtrace_disable
00:14:33.993   18:35:20 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x
00:14:33.993   18:35:20 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@259 -- # rm -rf /root/vhost_test/vhost/0
00:14:33.993   18:35:20 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@261 -- # return 0
00:14:33.993   18:35:20 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@63 -- # vhosttestfini
00:14:33.993   18:35:20 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@54 -- # '[' '' == iso ']'
00:14:33.993  
00:14:33.993  real	0m51.848s
00:14:33.993  user	3m22.702s
00:14:33.993  sys	0m2.669s
00:14:33.993   18:35:20 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:33.993   18:35:20 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x
00:14:33.993  ************************************
00:14:33.993  END TEST vfio_user_virtio_fs_fio
00:14:33.993  ************************************
00:14:33.993   18:35:20 vfio_user_qemu -- vfio_user/vfio_user.sh@26 -- # vhosttestfini
00:14:33.993   18:35:20 vfio_user_qemu -- vhost/common.sh@54 -- # '[' iso == iso ']'
00:14:33.993   18:35:20 vfio_user_qemu -- vhost/common.sh@55 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh reset
00:14:34.929  Waiting for block devices as requested
00:14:34.929  0000:00:04.7 (8086 6f27): vfio-pci -> ioatdma
00:14:35.188  0000:00:04.6 (8086 6f26): vfio-pci -> ioatdma
00:14:35.188  0000:00:04.5 (8086 6f25): vfio-pci -> ioatdma
00:14:35.188  0000:00:04.4 (8086 6f24): vfio-pci -> ioatdma
00:14:35.446  0000:00:04.3 (8086 6f23): vfio-pci -> ioatdma
00:14:35.446  0000:00:04.2 (8086 6f22): vfio-pci -> ioatdma
00:14:35.446  0000:00:04.1 (8086 6f21): vfio-pci -> ioatdma
00:14:35.446  0000:00:04.0 (8086 6f20): vfio-pci -> ioatdma
00:14:35.704  0000:80:04.7 (8086 6f27): vfio-pci -> ioatdma
00:14:35.705  0000:80:04.6 (8086 6f26): vfio-pci -> ioatdma
00:14:35.705  0000:80:04.5 (8086 6f25): vfio-pci -> ioatdma
00:14:35.705  0000:80:04.4 (8086 6f24): vfio-pci -> ioatdma
00:14:35.964  0000:80:04.3 (8086 6f23): vfio-pci -> ioatdma
00:14:35.964  0000:80:04.2 (8086 6f22): vfio-pci -> ioatdma
00:14:35.964  0000:80:04.1 (8086 6f21): vfio-pci -> ioatdma
00:14:36.223  0000:80:04.0 (8086 6f20): vfio-pci -> ioatdma
00:14:36.223  0000:0d:00.0 (8086 0a54): vfio-pci -> nvme
00:14:36.482  
00:14:36.482  real	6m47.061s
00:14:36.482  user	28m35.496s
00:14:36.482  sys	0m14.019s
00:14:36.482   18:35:22 vfio_user_qemu -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:36.482   18:35:22 vfio_user_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:36.482  ************************************
00:14:36.482  END TEST vfio_user_qemu
00:14:36.482  ************************************
00:14:36.482   18:35:22  -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:14:36.482   18:35:22  -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:14:36.482   18:35:22  -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:14:36.482   18:35:22  -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:14:36.482   18:35:22  -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:14:36.482   18:35:22  -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:14:36.482   18:35:22  -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:14:36.482   18:35:22  -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:14:36.482   18:35:22  -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:14:36.482   18:35:22  -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:14:36.482   18:35:22  -- spdk/autotest.sh@370 -- # [[ 1 -eq 1 ]]
00:14:36.482   18:35:22  -- spdk/autotest.sh@371 -- # run_test sma /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/sma.sh
00:14:36.482   18:35:22  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:36.482   18:35:22  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:36.482   18:35:22  -- common/autotest_common.sh@10 -- # set +x
00:14:36.482  ************************************
00:14:36.482  START TEST sma
00:14:36.482  ************************************
00:14:36.482   18:35:22 sma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/sma.sh
00:14:36.482  * Looking for test storage...
00:14:36.482  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:14:36.482    18:35:22 sma -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:14:36.482     18:35:22 sma -- common/autotest_common.sh@1693 -- # lcov --version
00:14:36.482     18:35:22 sma -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:14:36.482    18:35:22 sma -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:14:36.482    18:35:22 sma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:36.482    18:35:22 sma -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:36.482    18:35:22 sma -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:36.482    18:35:22 sma -- scripts/common.sh@336 -- # IFS=.-:
00:14:36.482    18:35:22 sma -- scripts/common.sh@336 -- # read -ra ver1
00:14:36.482    18:35:22 sma -- scripts/common.sh@337 -- # IFS=.-:
00:14:36.482    18:35:22 sma -- scripts/common.sh@337 -- # read -ra ver2
00:14:36.482    18:35:22 sma -- scripts/common.sh@338 -- # local 'op=<'
00:14:36.482    18:35:22 sma -- scripts/common.sh@340 -- # ver1_l=2
00:14:36.482    18:35:22 sma -- scripts/common.sh@341 -- # ver2_l=1
00:14:36.482    18:35:22 sma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:36.482    18:35:22 sma -- scripts/common.sh@344 -- # case "$op" in
00:14:36.482    18:35:22 sma -- scripts/common.sh@345 -- # : 1
00:14:36.482    18:35:22 sma -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:36.482    18:35:22 sma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:36.482     18:35:22 sma -- scripts/common.sh@365 -- # decimal 1
00:14:36.482     18:35:22 sma -- scripts/common.sh@353 -- # local d=1
00:14:36.482     18:35:22 sma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:36.482     18:35:22 sma -- scripts/common.sh@355 -- # echo 1
00:14:36.482    18:35:22 sma -- scripts/common.sh@365 -- # ver1[v]=1
00:14:36.482     18:35:22 sma -- scripts/common.sh@366 -- # decimal 2
00:14:36.482     18:35:22 sma -- scripts/common.sh@353 -- # local d=2
00:14:36.482     18:35:22 sma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:36.482     18:35:22 sma -- scripts/common.sh@355 -- # echo 2
00:14:36.482    18:35:22 sma -- scripts/common.sh@366 -- # ver2[v]=2
00:14:36.482    18:35:22 sma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:36.482    18:35:22 sma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:36.482    18:35:22 sma -- scripts/common.sh@368 -- # return 0
00:14:36.482    18:35:22 sma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:36.482    18:35:22 sma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:14:36.482  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:36.482  		--rc genhtml_branch_coverage=1
00:14:36.482  		--rc genhtml_function_coverage=1
00:14:36.482  		--rc genhtml_legend=1
00:14:36.482  		--rc geninfo_all_blocks=1
00:14:36.482  		--rc geninfo_unexecuted_blocks=1
00:14:36.482  		
00:14:36.482  		'
00:14:36.482    18:35:22 sma -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:14:36.482  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:36.482  		--rc genhtml_branch_coverage=1
00:14:36.482  		--rc genhtml_function_coverage=1
00:14:36.482  		--rc genhtml_legend=1
00:14:36.482  		--rc geninfo_all_blocks=1
00:14:36.482  		--rc geninfo_unexecuted_blocks=1
00:14:36.482  		
00:14:36.482  		'
00:14:36.482    18:35:22 sma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:14:36.482  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:36.482  		--rc genhtml_branch_coverage=1
00:14:36.482  		--rc genhtml_function_coverage=1
00:14:36.482  		--rc genhtml_legend=1
00:14:36.482  		--rc geninfo_all_blocks=1
00:14:36.482  		--rc geninfo_unexecuted_blocks=1
00:14:36.482  		
00:14:36.482  		'
00:14:36.482    18:35:22 sma -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:14:36.482  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:36.482  		--rc genhtml_branch_coverage=1
00:14:36.482  		--rc genhtml_function_coverage=1
00:14:36.482  		--rc genhtml_legend=1
00:14:36.482  		--rc geninfo_all_blocks=1
00:14:36.482  		--rc geninfo_unexecuted_blocks=1
00:14:36.482  		
00:14:36.482  		'
00:14:36.482   18:35:22 sma -- sma/sma.sh@11 -- # run_test sma_nvmf_tcp /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/nvmf_tcp.sh
00:14:36.482   18:35:22 sma -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:36.482   18:35:22 sma -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:36.482   18:35:22 sma -- common/autotest_common.sh@10 -- # set +x
00:14:36.482  ************************************
00:14:36.482  START TEST sma_nvmf_tcp
00:14:36.482  ************************************
00:14:36.482   18:35:22 sma.sma_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/nvmf_tcp.sh
00:14:36.482  * Looking for test storage...
00:14:36.482  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:14:36.482    18:35:23 sma.sma_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:14:36.482     18:35:23 sma.sma_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:14:36.482     18:35:23 sma.sma_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version
00:14:36.742    18:35:23 sma.sma_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:14:36.742    18:35:23 sma.sma_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:36.742    18:35:23 sma.sma_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:36.742    18:35:23 sma.sma_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:36.742    18:35:23 sma.sma_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:14:36.742    18:35:23 sma.sma_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:14:36.742    18:35:23 sma.sma_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:14:36.742    18:35:23 sma.sma_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:14:36.742    18:35:23 sma.sma_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:14:36.742    18:35:23 sma.sma_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:14:36.742    18:35:23 sma.sma_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:14:36.742    18:35:23 sma.sma_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:36.742    18:35:23 sma.sma_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in
00:14:36.742    18:35:23 sma.sma_nvmf_tcp -- scripts/common.sh@345 -- # : 1
00:14:36.742    18:35:23 sma.sma_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:36.742    18:35:23 sma.sma_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:36.742     18:35:23 sma.sma_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1
00:14:36.742     18:35:23 sma.sma_nvmf_tcp -- scripts/common.sh@353 -- # local d=1
00:14:36.742     18:35:23 sma.sma_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:36.742     18:35:23 sma.sma_nvmf_tcp -- scripts/common.sh@355 -- # echo 1
00:14:36.742    18:35:23 sma.sma_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:14:36.742     18:35:23 sma.sma_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2
00:14:36.742     18:35:23 sma.sma_nvmf_tcp -- scripts/common.sh@353 -- # local d=2
00:14:36.742     18:35:23 sma.sma_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:36.742     18:35:23 sma.sma_nvmf_tcp -- scripts/common.sh@355 -- # echo 2
00:14:36.742    18:35:23 sma.sma_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:14:36.742    18:35:23 sma.sma_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:36.742    18:35:23 sma.sma_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:36.742    18:35:23 sma.sma_nvmf_tcp -- scripts/common.sh@368 -- # return 0
00:14:36.742    18:35:23 sma.sma_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:36.742    18:35:23 sma.sma_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:14:36.742  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:36.742  		--rc genhtml_branch_coverage=1
00:14:36.742  		--rc genhtml_function_coverage=1
00:14:36.742  		--rc genhtml_legend=1
00:14:36.742  		--rc geninfo_all_blocks=1
00:14:36.742  		--rc geninfo_unexecuted_blocks=1
00:14:36.742  		
00:14:36.742  		'
00:14:36.742    18:35:23 sma.sma_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:14:36.742  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:36.742  		--rc genhtml_branch_coverage=1
00:14:36.742  		--rc genhtml_function_coverage=1
00:14:36.742  		--rc genhtml_legend=1
00:14:36.742  		--rc geninfo_all_blocks=1
00:14:36.742  		--rc geninfo_unexecuted_blocks=1
00:14:36.742  		
00:14:36.742  		'
00:14:36.742    18:35:23 sma.sma_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:14:36.742  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:36.742  		--rc genhtml_branch_coverage=1
00:14:36.742  		--rc genhtml_function_coverage=1
00:14:36.742  		--rc genhtml_legend=1
00:14:36.742  		--rc geninfo_all_blocks=1
00:14:36.742  		--rc geninfo_unexecuted_blocks=1
00:14:36.742  		
00:14:36.742  		'
00:14:36.742    18:35:23 sma.sma_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:14:36.742  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:36.742  		--rc genhtml_branch_coverage=1
00:14:36.742  		--rc genhtml_function_coverage=1
00:14:36.742  		--rc genhtml_legend=1
00:14:36.742  		--rc geninfo_all_blocks=1
00:14:36.742  		--rc geninfo_unexecuted_blocks=1
00:14:36.742  		
00:14:36.742  		'
00:14:36.742   18:35:23 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:14:36.742   18:35:23 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@70 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:14:36.742   18:35:23 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@72 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:14:36.742   18:35:23 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@73 -- # tgtpid=470852
00:14:36.742   18:35:23 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@83 -- # smapid=470854
00:14:36.742   18:35:23 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@75 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:14:36.742   18:35:23 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@86 -- # sma_waitforlisten
00:14:36.742   18:35:23 sma.sma_nvmf_tcp -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:14:36.742   18:35:23 sma.sma_nvmf_tcp -- sma/common.sh@8 -- # local sma_port=8080
00:14:36.742    18:35:23 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@75 -- # cat
00:14:36.742   18:35:23 sma.sma_nvmf_tcp -- sma/common.sh@10 -- # (( i = 0 ))
00:14:36.742   18:35:23 sma.sma_nvmf_tcp -- sma/common.sh@10 -- # (( i < 5 ))
00:14:36.742   18:35:23 sma.sma_nvmf_tcp -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:14:36.742   18:35:23 sma.sma_nvmf_tcp -- sma/common.sh@14 -- # sleep 1s
00:14:36.742  [2024-11-17 18:35:23.200356] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:14:36.742  [2024-11-17 18:35:23.200475] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid470852 ]
00:14:36.742  EAL: No free 2048 kB hugepages reported on node 1
00:14:37.001  [2024-11-17 18:35:23.328129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:37.001  [2024-11-17 18:35:23.365381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:37.570   18:35:24 sma.sma_nvmf_tcp -- sma/common.sh@10 -- # (( i++ ))
00:14:37.570   18:35:24 sma.sma_nvmf_tcp -- sma/common.sh@10 -- # (( i < 5 ))
00:14:37.570   18:35:24 sma.sma_nvmf_tcp -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:14:37.828   18:35:24 sma.sma_nvmf_tcp -- sma/common.sh@14 -- # sleep 1s
00:14:37.828  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:37.828  I0000 00:00:1731864924.336338  470854 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:37.828  [2024-11-17 18:35:24.349434] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:14:38.764   18:35:25 sma.sma_nvmf_tcp -- sma/common.sh@10 -- # (( i++ ))
00:14:38.764   18:35:25 sma.sma_nvmf_tcp -- sma/common.sh@10 -- # (( i < 5 ))
00:14:38.764   18:35:25 sma.sma_nvmf_tcp -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:14:38.764   18:35:25 sma.sma_nvmf_tcp -- sma/common.sh@12 -- # return 0
00:14:38.764   18:35:25 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@89 -- # rpc_cmd bdev_null_create null0 100 4096
00:14:38.764   18:35:25 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:38.764   18:35:25 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:38.764  null0
00:14:38.764   18:35:25 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:38.764   18:35:25 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@92 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:14:38.764   18:35:25 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:38.765   18:35:25 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:38.765  [
00:14:38.765  {
00:14:38.765  "trtype": "TCP",
00:14:38.765  "max_queue_depth": 128,
00:14:38.765  "max_io_qpairs_per_ctrlr": 127,
00:14:38.765  "in_capsule_data_size": 4096,
00:14:38.765  "max_io_size": 131072,
00:14:38.765  "io_unit_size": 131072,
00:14:38.765  "max_aq_depth": 128,
00:14:38.765  "num_shared_buffers": 511,
00:14:38.765  "buf_cache_size": 4294967295,
00:14:38.765  "dif_insert_or_strip": false,
00:14:38.765  "zcopy": false,
00:14:38.765  "c2h_success": true,
00:14:38.765  "sock_priority": 0,
00:14:38.765  "abort_timeout_sec": 1,
00:14:38.765  "ack_timeout": 0,
00:14:38.765  "data_wr_pool_size": 0
00:14:38.765  }
00:14:38.765  ]
00:14:38.765   18:35:25 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:38.765    18:35:25 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@95 -- # create_device nqn.2016-06.io.spdk:cnode0
00:14:38.765    18:35:25 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@95 -- # jq -r .handle
00:14:38.765    18:35:25 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:39.023  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:39.023  I0000 00:00:1731864925.424181  471253 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:39.023  I0000 00:00:1731864925.425958  471253 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:39.023  I0000 00:00:1731864925.427251  471260 subchannel.cc:806] subchannel 0x55e62bea2280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55e62be24880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55e62bfe3cf0, grpc.internal.client_channel_call_destination=0x7fd93a984390, grpc.internal.event_engine=0x55e62bcb3e40, grpc.internal.security_connector=0x55e62be0aaa0, grpc.internal.subchannel_pool=0x55e62c0154f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55e62c018890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:35:25.426776195+01:00"}), backing off for 1000 ms
00:14:39.023  [2024-11-17 18:35:25.448844] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:14:39.023   18:35:25 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@95 -- # devid0=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:14:39.023   18:35:25 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@96 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:14:39.023   18:35:25 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:39.023   18:35:25 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:39.023  [
00:14:39.023  {
00:14:39.023  "nqn": "nqn.2016-06.io.spdk:cnode0",
00:14:39.023  "subtype": "NVMe",
00:14:39.023  "listen_addresses": [
00:14:39.023  {
00:14:39.023  "trtype": "TCP",
00:14:39.023  "adrfam": "IPv4",
00:14:39.024  "traddr": "127.0.0.1",
00:14:39.024  "trsvcid": "4420"
00:14:39.024  }
00:14:39.024  ],
00:14:39.024  "allow_any_host": false,
00:14:39.024  "hosts": [],
00:14:39.024  "serial_number": "00000000000000000000",
00:14:39.024  "model_number": "SPDK bdev Controller",
00:14:39.024  "max_namespaces": 32,
00:14:39.024  "min_cntlid": 1,
00:14:39.024  "max_cntlid": 65519,
00:14:39.024  "namespaces": []
00:14:39.024  }
00:14:39.024  ]
00:14:39.024   18:35:25 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:39.024    18:35:25 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@98 -- # create_device nqn.2016-06.io.spdk:cnode1
00:14:39.024    18:35:25 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:39.024    18:35:25 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@98 -- # jq -r .handle
00:14:39.282  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:39.282  I0000 00:00:1731864925.691260  471293 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:39.282  I0000 00:00:1731864925.692938  471293 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:39.282  I0000 00:00:1731864925.694333  471479 subchannel.cc:806] subchannel 0x55c3d0b1f280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55c3d0aa1880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55c3d0c60cf0, grpc.internal.client_channel_call_destination=0x7f22f72a0390, grpc.internal.event_engine=0x55c3d0930e40, grpc.internal.security_connector=0x55c3d0a87aa0, grpc.internal.subchannel_pool=0x55c3d0c924f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55c3d0c95890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:35:25.693682403+01:00"}), backing off for 1000 ms
00:14:39.282   18:35:25 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@98 -- # devid1=nvmf-tcp:nqn.2016-06.io.spdk:cnode1
00:14:39.282   18:35:25 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@99 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:14:39.282   18:35:25 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:39.282   18:35:25 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:39.282  [
00:14:39.282  {
00:14:39.282  "nqn": "nqn.2016-06.io.spdk:cnode0",
00:14:39.282  "subtype": "NVMe",
00:14:39.282  "listen_addresses": [
00:14:39.282  {
00:14:39.282  "trtype": "TCP",
00:14:39.282  "adrfam": "IPv4",
00:14:39.282  "traddr": "127.0.0.1",
00:14:39.282  "trsvcid": "4420"
00:14:39.282  }
00:14:39.282  ],
00:14:39.282  "allow_any_host": false,
00:14:39.282  "hosts": [],
00:14:39.282  "serial_number": "00000000000000000000",
00:14:39.282  "model_number": "SPDK bdev Controller",
00:14:39.282  "max_namespaces": 32,
00:14:39.282  "min_cntlid": 1,
00:14:39.282  "max_cntlid": 65519,
00:14:39.282  "namespaces": []
00:14:39.282  }
00:14:39.282  ]
00:14:39.282   18:35:25 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:39.282   18:35:25 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@100 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1
00:14:39.282   18:35:25 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:39.282   18:35:25 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:39.282  [
00:14:39.282  {
00:14:39.282  "nqn": "nqn.2016-06.io.spdk:cnode1",
00:14:39.282  "subtype": "NVMe",
00:14:39.282  "listen_addresses": [
00:14:39.282  {
00:14:39.282  "trtype": "TCP",
00:14:39.282  "adrfam": "IPv4",
00:14:39.282  "traddr": "127.0.0.1",
00:14:39.282  "trsvcid": "4420"
00:14:39.282  }
00:14:39.282  ],
00:14:39.282  "allow_any_host": false,
00:14:39.282  "hosts": [],
00:14:39.282  "serial_number": "00000000000000000000",
00:14:39.282  "model_number": "SPDK bdev Controller",
00:14:39.282  "max_namespaces": 32,
00:14:39.283  "min_cntlid": 1,
00:14:39.283  "max_cntlid": 65519,
00:14:39.283  "namespaces": []
00:14:39.283  }
00:14:39.283  ]
00:14:39.283   18:35:25 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:39.283   18:35:25 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@101 -- # [[ nvmf-tcp:nqn.2016-06.io.spdk:cnode0 != \n\v\m\f\-\t\c\p\:\n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]]
00:14:39.283    18:35:25 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@104 -- # rpc_cmd nvmf_get_subsystems
00:14:39.283    18:35:25 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:39.283    18:35:25 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:39.283    18:35:25 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@104 -- # jq -r '. | length'
00:14:39.283    18:35:25 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:39.283   18:35:25 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@104 -- # [[ 3 -eq 3 ]]
00:14:39.283    18:35:25 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@108 -- # create_device nqn.2016-06.io.spdk:cnode0
00:14:39.283    18:35:25 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:39.283    18:35:25 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@108 -- # jq -r .handle
00:14:39.541  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:39.541  I0000 00:00:1731864926.035467  471506 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:39.541  I0000 00:00:1731864926.037653  471506 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:39.541  I0000 00:00:1731864926.039361  471511 subchannel.cc:806] subchannel 0x55f07fa1c280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55f07f99e880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55f07fb5dcf0, grpc.internal.client_channel_call_destination=0x7efd394c4390, grpc.internal.event_engine=0x55f07f82de40, grpc.internal.security_connector=0x55f07f984aa0, grpc.internal.subchannel_pool=0x55f07fb8f4f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55f07fb92890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:35:26.038887114+01:00"}), backing off for 999 ms
00:14:39.541   18:35:26 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@108 -- # tmp0=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:14:39.541    18:35:26 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@109 -- # create_device nqn.2016-06.io.spdk:cnode1
00:14:39.541    18:35:26 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:39.541    18:35:26 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@109 -- # jq -r .handle
00:14:39.800  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:39.800  I0000 00:00:1731864926.269279  471534 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:39.800  I0000 00:00:1731864926.271147  471534 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:39.800  I0000 00:00:1731864926.272596  471537 subchannel.cc:806] subchannel 0x55e0f097e280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55e0f0900880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55e0f0abfcf0, grpc.internal.client_channel_call_destination=0x7fef031d1390, grpc.internal.event_engine=0x55e0f078fe40, grpc.internal.security_connector=0x55e0f08e6aa0, grpc.internal.subchannel_pool=0x55e0f0af14f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55e0f0af4890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:35:26.272061192+01:00"}), backing off for 999 ms
00:14:39.800   18:35:26 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@109 -- # tmp1=nvmf-tcp:nqn.2016-06.io.spdk:cnode1
00:14:39.800    18:35:26 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@111 -- # jq -r '. | length'
00:14:39.800    18:35:26 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@111 -- # rpc_cmd nvmf_get_subsystems
00:14:39.800    18:35:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:39.800    18:35:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:39.800    18:35:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:39.800   18:35:26 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@111 -- # [[ 3 -eq 3 ]]
00:14:39.800   18:35:26 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@112 -- # [[ nvmf-tcp:nqn.2016-06.io.spdk:cnode0 == \n\v\m\f\-\t\c\p\:\n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]]
00:14:39.800   18:35:26 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@113 -- # [[ nvmf-tcp:nqn.2016-06.io.spdk:cnode1 == \n\v\m\f\-\t\c\p\:\n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]]
00:14:39.800   18:35:26 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@116 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:14:39.800   18:35:26 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:40.059  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:40.059  I0000 00:00:1731864926.527666  471560 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:40.059  I0000 00:00:1731864926.529461  471560 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:40.059  I0000 00:00:1731864926.530912  471568 subchannel.cc:806] subchannel 0x56488c21f280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x56488c1a1880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x56488c360cf0, grpc.internal.client_channel_call_destination=0x7f83eddf4390, grpc.internal.event_engine=0x56488be1b7d0, grpc.internal.security_connector=0x56488c187aa0, grpc.internal.subchannel_pool=0x56488c3924f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x56488c395890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:35:26.530339248+01:00"}), backing off for 1000 ms
00:14:40.059  {}
00:14:40.059   18:35:26 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@117 -- # NOT rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:14:40.059   18:35:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@652 -- # local es=0
00:14:40.059   18:35:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:14:40.059   18:35:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:14:40.059   18:35:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:40.059    18:35:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:14:40.059   18:35:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:40.059   18:35:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:14:40.059   18:35:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:40.059   18:35:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:40.059  [2024-11-17 18:35:26.571578] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:cnode0' does not exist
00:14:40.059  request:
00:14:40.059  {
00:14:40.059  "nqn": "nqn.2016-06.io.spdk:cnode0",
00:14:40.059  "method": "nvmf_get_subsystems",
00:14:40.059  "req_id": 1
00:14:40.059  }
00:14:40.059  Got JSON-RPC error response
00:14:40.059  response:
00:14:40.059  {
00:14:40.059  "code": -19,
00:14:40.059  "message": "No such device"
00:14:40.059  }
00:14:40.059   18:35:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:14:40.059   18:35:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@655 -- # es=1
00:14:40.059   18:35:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:40.059   18:35:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:14:40.059   18:35:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:14:40.059    18:35:26 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@118 -- # rpc_cmd nvmf_get_subsystems
00:14:40.059    18:35:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:40.059    18:35:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:40.059    18:35:26 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@118 -- # jq -r '. | length'
00:14:40.059    18:35:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:40.059   18:35:26 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@118 -- # [[ 2 -eq 2 ]]
00:14:40.059   18:35:26 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@120 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:cnode1
00:14:40.059   18:35:26 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:40.317  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:40.317  I0000 00:00:1731864926.830076  471592 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:40.317  I0000 00:00:1731864926.831635  471592 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:40.317  I0000 00:00:1731864926.832983  471772 subchannel.cc:806] subchannel 0x56552806e280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x565527ff0880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5655281afcf0, grpc.internal.client_channel_call_destination=0x7f7cc91e3390, grpc.internal.event_engine=0x565527c6a7d0, grpc.internal.security_connector=0x565527fd6aa0, grpc.internal.subchannel_pool=0x5655281e14f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5655281e4890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:35:26.832504242+01:00"}), backing off for 1000 ms
00:14:40.318  {}
00:14:40.318   18:35:26 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@121 -- # NOT rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1
00:14:40.318   18:35:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@652 -- # local es=0
00:14:40.318   18:35:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1
00:14:40.318   18:35:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:14:40.318   18:35:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:40.318    18:35:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:14:40.318   18:35:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:40.318   18:35:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1
00:14:40.318   18:35:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:40.318   18:35:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:40.318  [2024-11-17 18:35:26.876437] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:cnode1' does not exist
00:14:40.318  request:
00:14:40.318  {
00:14:40.318  "nqn": "nqn.2016-06.io.spdk:cnode1",
00:14:40.318  "method": "nvmf_get_subsystems",
00:14:40.318  "req_id": 1
00:14:40.318  }
00:14:40.318  Got JSON-RPC error response
00:14:40.318  response:
00:14:40.318  {
00:14:40.318  "code": -19,
00:14:40.318  "message": "No such device"
00:14:40.318  }
00:14:40.318   18:35:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:14:40.318   18:35:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@655 -- # es=1
00:14:40.318   18:35:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:40.318   18:35:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:14:40.318   18:35:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:14:40.318    18:35:26 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@122 -- # jq -r '. | length'
00:14:40.318    18:35:26 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@122 -- # rpc_cmd nvmf_get_subsystems
00:14:40.318    18:35:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:40.318    18:35:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:40.576    18:35:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:40.576   18:35:26 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@122 -- # [[ 1 -eq 1 ]]
00:14:40.576   18:35:26 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@125 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:14:40.576   18:35:26 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:40.576  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:40.576  I0000 00:00:1731864927.107552  471812 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:40.576  I0000 00:00:1731864927.109205  471812 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:40.576  I0000 00:00:1731864927.110436  471813 subchannel.cc:806] subchannel 0x563623a15280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x563623997880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x563623b56cf0, grpc.internal.client_channel_call_destination=0x7fa2ba20d390, grpc.internal.event_engine=0x5636236117d0, grpc.internal.security_connector=0x56362397daa0, grpc.internal.subchannel_pool=0x563623b884f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x563623b8b890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:35:27.109996751+01:00"}), backing off for 999 ms
00:14:40.576  {}
00:14:40.576   18:35:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@126 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:cnode1
00:14:40.576   18:35:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:40.834  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:40.834  I0000 00:00:1731864927.328671  471833 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:40.834  I0000 00:00:1731864927.330213  471833 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:40.834  I0000 00:00:1731864927.331500  471834 subchannel.cc:806] subchannel 0x55ff46724280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55ff466a6880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55ff46865cf0, grpc.internal.client_channel_call_destination=0x7fdb9579b390, grpc.internal.event_engine=0x55ff463207d0, grpc.internal.security_connector=0x55ff4668caa0, grpc.internal.subchannel_pool=0x55ff468974f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55ff4689a890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:35:27.330972477+01:00"}), backing off for 999 ms
00:14:40.834  {}
00:14:40.834    18:35:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@129 -- # create_device nqn.2016-06.io.spdk:cnode0
00:14:40.834    18:35:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:40.834    18:35:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@129 -- # jq -r .handle
00:14:41.093  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:41.093  I0000 00:00:1731864927.550417  471860 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:41.093  I0000 00:00:1731864927.552024  471860 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:41.093  I0000 00:00:1731864927.553266  471868 subchannel.cc:806] subchannel 0x55a72e315280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55a72e297880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55a72e456cf0, grpc.internal.client_channel_call_destination=0x7fd556282390, grpc.internal.event_engine=0x55a72e126e40, grpc.internal.security_connector=0x55a72e27daa0, grpc.internal.subchannel_pool=0x55a72e4884f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55a72e48b890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:35:27.552793172+01:00"}), backing off for 1000 ms
00:14:41.093  [2024-11-17 18:35:27.574681] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:14:41.093   18:35:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@129 -- # devid0=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:14:41.093    18:35:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@130 -- # create_device nqn.2016-06.io.spdk:cnode1
00:14:41.093    18:35:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:41.093    18:35:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@130 -- # jq -r .handle
00:14:41.351  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:41.351  I0000 00:00:1731864927.792743  471891 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:41.351  I0000 00:00:1731864927.794402  471891 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:41.351  I0000 00:00:1731864927.795779  471894 subchannel.cc:806] subchannel 0x55db4e049280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55db4dfcb880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55db4e18acf0, grpc.internal.client_channel_call_destination=0x7f0406d07390, grpc.internal.event_engine=0x55db4de5ae40, grpc.internal.security_connector=0x55db4dfb1aa0, grpc.internal.subchannel_pool=0x55db4e1bc4f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55db4e1bf890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:35:27.795325083+01:00"}), backing off for 1000 ms
00:14:41.351   18:35:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@130 -- # devid1=nvmf-tcp:nqn.2016-06.io.spdk:cnode1
00:14:41.351    18:35:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@131 -- # rpc_cmd bdev_get_bdevs -b null0
00:14:41.351    18:35:27 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:41.351    18:35:27 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:41.351    18:35:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@131 -- # jq -r '.[].uuid'
00:14:41.351    18:35:27 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:41.351   18:35:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@131 -- # uuid=0c50225a-72fd-49a9-a4c0-c87aee8eb954
00:14:41.351   18:35:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@134 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 0c50225a-72fd-49a9-a4c0-c87aee8eb954
00:14:41.351   18:35:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@45 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:41.351    18:35:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@45 -- # uuid2base64 0c50225a-72fd-49a9-a4c0-c87aee8eb954
00:14:41.351    18:35:27 sma.sma_nvmf_tcp -- sma/common.sh@20 -- # python
00:14:41.610  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:41.610  I0000 00:00:1731864928.135206  471950 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:41.610  I0000 00:00:1731864928.139978  471950 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:41.610  I0000 00:00:1731864928.141253  472113 subchannel.cc:806] subchannel 0x5613636a1280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x561363623880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5613637e2cf0, grpc.internal.client_channel_call_destination=0x7fadf635f390, grpc.internal.event_engine=0x56136329d7d0, grpc.internal.security_connector=0x5613635ffa50, grpc.internal.subchannel_pool=0x5613638144f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x561363817890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:35:28.140802536+01:00"}), backing off for 1000 ms
00:14:41.610  {}
00:14:41.869    18:35:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@135 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:14:41.869    18:35:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@135 -- # jq -r '.[0].namespaces | length'
00:14:41.869    18:35:28 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:41.869    18:35:28 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:41.869    18:35:28 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:41.869   18:35:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@135 -- # [[ 1 -eq 1 ]]
00:14:41.869    18:35:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@136 -- # jq -r '.[0].namespaces | length'
00:14:41.869    18:35:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@136 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1
00:14:41.869    18:35:28 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:41.869    18:35:28 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:41.869    18:35:28 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:41.869   18:35:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@136 -- # [[ 0 -eq 0 ]]
00:14:41.869    18:35:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@137 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:14:41.869    18:35:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@137 -- # jq -r '.[0].namespaces[0].uuid'
00:14:41.869    18:35:28 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:41.869    18:35:28 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:41.869    18:35:28 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:41.869   18:35:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@137 -- # [[ 0c50225a-72fd-49a9-a4c0-c87aee8eb954 == \0\c\5\0\2\2\5\a\-\7\2\f\d\-\4\9\a\9\-\a\4\c\0\-\c\8\7\a\e\e\8\e\b\9\5\4 ]]
00:14:41.869   18:35:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@140 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 0c50225a-72fd-49a9-a4c0-c87aee8eb954
00:14:41.869   18:35:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@45 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:41.869    18:35:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@45 -- # uuid2base64 0c50225a-72fd-49a9-a4c0-c87aee8eb954
00:14:41.869    18:35:28 sma.sma_nvmf_tcp -- sma/common.sh@20 -- # python
00:14:42.127  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:42.127  I0000 00:00:1731864928.532463  472143 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:42.127  I0000 00:00:1731864928.533977  472143 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:42.127  I0000 00:00:1731864928.535292  472152 subchannel.cc:806] subchannel 0x558a255b9280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x558a2553b880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x558a256facf0, grpc.internal.client_channel_call_destination=0x7fcc4ba38390, grpc.internal.event_engine=0x558a251b57d0, grpc.internal.security_connector=0x558a25517a50, grpc.internal.subchannel_pool=0x558a2572c4f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x558a2572f890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:35:28.534825182+01:00"}), backing off for 999 ms
00:14:42.127  {}
00:14:42.127    18:35:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@141 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:14:42.127    18:35:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@141 -- # jq -r '.[0].namespaces | length'
00:14:42.127    18:35:28 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:42.127    18:35:28 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:42.127    18:35:28 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:42.127   18:35:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@141 -- # [[ 1 -eq 1 ]]
00:14:42.127    18:35:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@142 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1
00:14:42.128    18:35:28 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:42.128    18:35:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@142 -- # jq -r '.[0].namespaces | length'
00:14:42.128    18:35:28 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:42.128    18:35:28 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:42.128   18:35:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@142 -- # [[ 0 -eq 0 ]]
00:14:42.128    18:35:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@143 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:14:42.128    18:35:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@143 -- # jq -r '.[0].namespaces[0].uuid'
00:14:42.128    18:35:28 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:42.128    18:35:28 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:42.128    18:35:28 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:42.128   18:35:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@143 -- # [[ 0c50225a-72fd-49a9-a4c0-c87aee8eb954 == \0\c\5\0\2\2\5\a\-\7\2\f\d\-\4\9\a\9\-\a\4\c\0\-\c\8\7\a\e\e\8\e\b\9\5\4 ]]
00:14:42.128   18:35:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@146 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 0c50225a-72fd-49a9-a4c0-c87aee8eb954
00:14:42.128   18:35:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@59 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:42.128    18:35:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@59 -- # uuid2base64 0c50225a-72fd-49a9-a4c0-c87aee8eb954
00:14:42.128    18:35:28 sma.sma_nvmf_tcp -- sma/common.sh@20 -- # python
00:14:42.386  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:42.386  I0000 00:00:1731864928.902160  472181 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:42.386  I0000 00:00:1731864928.903871  472181 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:42.386  I0000 00:00:1731864928.905180  472187 subchannel.cc:806] subchannel 0x55b13aeae280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55b13ae30880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55b13afefcf0, grpc.internal.client_channel_call_destination=0x7f4a9f1f1390, grpc.internal.event_engine=0x55b13acbfe40, grpc.internal.security_connector=0x55b13ae16aa0, grpc.internal.subchannel_pool=0x55b13b0214f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55b13b024890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:35:28.904717096+01:00"}), backing off for 1000 ms
00:14:42.386  {}
00:14:42.386    18:35:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@147 -- # jq -r '.[0].namespaces | length'
00:14:42.386    18:35:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@147 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:14:42.386    18:35:28 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:42.386    18:35:28 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:42.645    18:35:28 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:42.645   18:35:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@147 -- # [[ 0 -eq 0 ]]
00:14:42.645    18:35:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@148 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1
00:14:42.645    18:35:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@148 -- # jq -r '.[0].namespaces | length'
00:14:42.645    18:35:28 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:42.645    18:35:28 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:42.645    18:35:28 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:42.645   18:35:29 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@148 -- # [[ 0 -eq 0 ]]
00:14:42.645   18:35:29 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@151 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 0c50225a-72fd-49a9-a4c0-c87aee8eb954
00:14:42.645   18:35:29 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@59 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:42.645    18:35:29 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@59 -- # uuid2base64 0c50225a-72fd-49a9-a4c0-c87aee8eb954
00:14:42.645    18:35:29 sma.sma_nvmf_tcp -- sma/common.sh@20 -- # python
00:14:42.904  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:42.904  I0000 00:00:1731864929.278654  472217 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:42.904  I0000 00:00:1731864929.280280  472217 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:42.904  I0000 00:00:1731864929.281811  472414 subchannel.cc:806] subchannel 0x563d05f02280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x563d05e84880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x563d06043cf0, grpc.internal.client_channel_call_destination=0x7fe9b6971390, grpc.internal.event_engine=0x563d05d13e40, grpc.internal.security_connector=0x563d05e6aaa0, grpc.internal.subchannel_pool=0x563d060754f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x563d06078890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:35:29.281167718+01:00"}), backing off for 1000 ms
00:14:42.904  {}
00:14:42.904   18:35:29 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@153 -- # cleanup
00:14:42.904   18:35:29 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@13 -- # killprocess 470852
00:14:42.904   18:35:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 470852 ']'
00:14:42.904   18:35:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 470852
00:14:42.904    18:35:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@959 -- # uname
00:14:42.904   18:35:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:42.904    18:35:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 470852
00:14:42.904   18:35:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:42.904   18:35:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:42.904   18:35:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 470852'
00:14:42.904  killing process with pid 470852
00:14:42.904   18:35:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 470852
00:14:42.904   18:35:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 470852
00:14:43.471   18:35:29 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@14 -- # killprocess 470854
00:14:43.471   18:35:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 470854 ']'
00:14:43.471   18:35:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 470854
00:14:43.471    18:35:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@959 -- # uname
00:14:43.471   18:35:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:43.471    18:35:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 470854
00:14:43.471   18:35:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=python3
00:14:43.471   18:35:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:14:43.471   18:35:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 470854'
00:14:43.471  killing process with pid 470854
00:14:43.471   18:35:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 470854
00:14:43.471   18:35:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 470854
00:14:43.471   18:35:29 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@154 -- # trap - SIGINT SIGTERM EXIT
00:14:43.471  
00:14:43.471  real	0m6.892s
00:14:43.471  user	0m10.247s
00:14:43.471  sys	0m1.129s
00:14:43.471   18:35:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:43.471   18:35:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:43.471  ************************************
00:14:43.471  END TEST sma_nvmf_tcp
00:14:43.471  ************************************
00:14:43.471   18:35:29 sma -- sma/sma.sh@12 -- # run_test sma_vfiouser_qemu /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/vfiouser_qemu.sh
00:14:43.471   18:35:29 sma -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:43.471   18:35:29 sma -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:43.471   18:35:29 sma -- common/autotest_common.sh@10 -- # set +x
00:14:43.471  ************************************
00:14:43.471  START TEST sma_vfiouser_qemu
00:14:43.471  ************************************
00:14:43.471   18:35:29 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/vfiouser_qemu.sh
00:14:43.471  * Looking for test storage...
00:14:43.471  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:14:43.471    18:35:29 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:14:43.471     18:35:29 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1693 -- # lcov --version
00:14:43.471     18:35:29 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:14:43.472    18:35:30 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:14:43.472    18:35:30 sma.sma_vfiouser_qemu -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:43.472    18:35:30 sma.sma_vfiouser_qemu -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:43.472    18:35:30 sma.sma_vfiouser_qemu -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:43.472    18:35:30 sma.sma_vfiouser_qemu -- scripts/common.sh@336 -- # IFS=.-:
00:14:43.732    18:35:30 sma.sma_vfiouser_qemu -- scripts/common.sh@336 -- # read -ra ver1
00:14:43.732    18:35:30 sma.sma_vfiouser_qemu -- scripts/common.sh@337 -- # IFS=.-:
00:14:43.732    18:35:30 sma.sma_vfiouser_qemu -- scripts/common.sh@337 -- # read -ra ver2
00:14:43.732    18:35:30 sma.sma_vfiouser_qemu -- scripts/common.sh@338 -- # local 'op=<'
00:14:43.732    18:35:30 sma.sma_vfiouser_qemu -- scripts/common.sh@340 -- # ver1_l=2
00:14:43.732    18:35:30 sma.sma_vfiouser_qemu -- scripts/common.sh@341 -- # ver2_l=1
00:14:43.732    18:35:30 sma.sma_vfiouser_qemu -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:43.732    18:35:30 sma.sma_vfiouser_qemu -- scripts/common.sh@344 -- # case "$op" in
00:14:43.732    18:35:30 sma.sma_vfiouser_qemu -- scripts/common.sh@345 -- # : 1
00:14:43.732    18:35:30 sma.sma_vfiouser_qemu -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:43.732    18:35:30 sma.sma_vfiouser_qemu -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:43.732     18:35:30 sma.sma_vfiouser_qemu -- scripts/common.sh@365 -- # decimal 1
00:14:43.732     18:35:30 sma.sma_vfiouser_qemu -- scripts/common.sh@353 -- # local d=1
00:14:43.732     18:35:30 sma.sma_vfiouser_qemu -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:43.732     18:35:30 sma.sma_vfiouser_qemu -- scripts/common.sh@355 -- # echo 1
00:14:43.732    18:35:30 sma.sma_vfiouser_qemu -- scripts/common.sh@365 -- # ver1[v]=1
00:14:43.732     18:35:30 sma.sma_vfiouser_qemu -- scripts/common.sh@366 -- # decimal 2
00:14:43.732     18:35:30 sma.sma_vfiouser_qemu -- scripts/common.sh@353 -- # local d=2
00:14:43.732     18:35:30 sma.sma_vfiouser_qemu -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:43.732     18:35:30 sma.sma_vfiouser_qemu -- scripts/common.sh@355 -- # echo 2
00:14:43.732    18:35:30 sma.sma_vfiouser_qemu -- scripts/common.sh@366 -- # ver2[v]=2
00:14:43.732    18:35:30 sma.sma_vfiouser_qemu -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:43.732    18:35:30 sma.sma_vfiouser_qemu -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:43.732    18:35:30 sma.sma_vfiouser_qemu -- scripts/common.sh@368 -- # return 0
00:14:43.732    18:35:30 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:43.732    18:35:30 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:14:43.732  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:43.732  		--rc genhtml_branch_coverage=1
00:14:43.732  		--rc genhtml_function_coverage=1
00:14:43.732  		--rc genhtml_legend=1
00:14:43.732  		--rc geninfo_all_blocks=1
00:14:43.732  		--rc geninfo_unexecuted_blocks=1
00:14:43.732  		
00:14:43.732  		'
00:14:43.732    18:35:30 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:14:43.732  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:43.732  		--rc genhtml_branch_coverage=1
00:14:43.732  		--rc genhtml_function_coverage=1
00:14:43.732  		--rc genhtml_legend=1
00:14:43.732  		--rc geninfo_all_blocks=1
00:14:43.732  		--rc geninfo_unexecuted_blocks=1
00:14:43.732  		
00:14:43.732  		'
00:14:43.732    18:35:30 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:14:43.732  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:43.732  		--rc genhtml_branch_coverage=1
00:14:43.732  		--rc genhtml_function_coverage=1
00:14:43.732  		--rc genhtml_legend=1
00:14:43.732  		--rc geninfo_all_blocks=1
00:14:43.733  		--rc geninfo_unexecuted_blocks=1
00:14:43.733  		
00:14:43.733  		'
00:14:43.733    18:35:30 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:14:43.733  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:43.733  		--rc genhtml_branch_coverage=1
00:14:43.733  		--rc genhtml_function_coverage=1
00:14:43.733  		--rc genhtml_legend=1
00:14:43.733  		--rc geninfo_all_blocks=1
00:14:43.733  		--rc geninfo_unexecuted_blocks=1
00:14:43.733  		
00:14:43.733  		'
00:14:43.733   18:35:30 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/common.sh
00:14:43.733    18:35:30 sma.sma_vfiouser_qemu -- vfio_user/common.sh@6 -- # : 128
00:14:43.733    18:35:30 sma.sma_vfiouser_qemu -- vfio_user/common.sh@7 -- # : 512
00:14:43.733    18:35:30 sma.sma_vfiouser_qemu -- vfio_user/common.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh
00:14:43.733     18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@6 -- # : false
00:14:43.733     18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@7 -- # : /root/vhost_test
00:14:43.733     18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@8 -- # : /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:14:43.733     18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@9 -- # : qemu-img
00:14:43.733      18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/..
00:14:43.733     18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest
00:14:43.733     18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:14:43.733     18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:14:43.733     18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:14:43.733     18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:14:43.733     18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:14:43.733       18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/vfiouser_qemu.sh
00:14:43.733      18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:14:43.733     18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:14:43.733     18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:14:43.733     18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:14:43.733     18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:14:43.733     18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:14:43.733     18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config
00:14:43.733      18:35:30 sma.sma_vfiouser_qemu -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:14:43.733      18:35:30 sma.sma_vfiouser_qemu -- common/autotest.config@2 -- # vhost_0_main_core=0
00:14:43.733      18:35:30 sma.sma_vfiouser_qemu -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:14:43.733      18:35:30 sma.sma_vfiouser_qemu -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:14:43.733      18:35:30 sma.sma_vfiouser_qemu -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:14:43.733      18:35:30 sma.sma_vfiouser_qemu -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:14:43.733      18:35:30 sma.sma_vfiouser_qemu -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:14:43.733      18:35:30 sma.sma_vfiouser_qemu -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:14:43.733      18:35:30 sma.sma_vfiouser_qemu -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:14:43.733      18:35:30 sma.sma_vfiouser_qemu -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:14:43.733      18:35:30 sma.sma_vfiouser_qemu -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:14:43.733      18:35:30 sma.sma_vfiouser_qemu -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:14:43.733      18:35:30 sma.sma_vfiouser_qemu -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:14:43.733      18:35:30 sma.sma_vfiouser_qemu -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:14:43.733      18:35:30 sma.sma_vfiouser_qemu -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:14:43.733      18:35:30 sma.sma_vfiouser_qemu -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:14:43.733      18:35:30 sma.sma_vfiouser_qemu -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:14:43.733      18:35:30 sma.sma_vfiouser_qemu -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:14:43.733      18:35:30 sma.sma_vfiouser_qemu -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:14:43.733      18:35:30 sma.sma_vfiouser_qemu -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:14:43.733      18:35:30 sma.sma_vfiouser_qemu -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:14:43.733      18:35:30 sma.sma_vfiouser_qemu -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:14:43.733      18:35:30 sma.sma_vfiouser_qemu -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:14:43.733      18:35:30 sma.sma_vfiouser_qemu -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:14:43.733      18:35:30 sma.sma_vfiouser_qemu -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:14:43.733      18:35:30 sma.sma_vfiouser_qemu -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:14:43.733     18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh
00:14:43.733      18:35:30 sma.sma_vfiouser_qemu -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:14:43.733      18:35:30 sma.sma_vfiouser_qemu -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:14:43.733      18:35:30 sma.sma_vfiouser_qemu -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:14:43.733      18:35:30 sma.sma_vfiouser_qemu -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler
00:14:43.733      18:35:30 sma.sma_vfiouser_qemu -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:14:43.733      18:35:30 sma.sma_vfiouser_qemu -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh
00:14:43.733       18:35:30 sma.sma_vfiouser_qemu -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:14:43.733        18:35:30 sma.sma_vfiouser_qemu -- scheduler/cgroups.sh@244 -- # check_cgroup
00:14:43.733        18:35:30 sma.sma_vfiouser_qemu -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:14:43.733        18:35:30 sma.sma_vfiouser_qemu -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:14:43.733        18:35:30 sma.sma_vfiouser_qemu -- scheduler/cgroups.sh@10 -- # echo 2
00:14:43.733       18:35:30 sma.sma_vfiouser_qemu -- scheduler/cgroups.sh@244 -- # cgroup_version=2
00:14:43.733    18:35:30 sma.sma_vfiouser_qemu -- vfio_user/common.sh@11 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:14:43.733    18:35:30 sma.sma_vfiouser_qemu -- vfio_user/common.sh@14 -- # [[ ! -e /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 ]]
00:14:43.733    18:35:30 sma.sma_vfiouser_qemu -- vfio_user/common.sh@19 -- # QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:14:43.733   18:35:30 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:14:43.733   18:35:30 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@104 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:14:43.733   18:35:30 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@107 -- # VM_PASSWORD=root
00:14:43.733   18:35:30 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@108 -- # vm_no=0
00:14:43.733   18:35:30 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@110 -- # VFO_ROOT_PATH=/tmp/sma/vfio-user/qemu
00:14:43.733   18:35:30 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@112 -- # '[' -e /tmp/sma/vfio-user/qemu ']'
00:14:43.733   18:35:30 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@113 -- # mkdir -p /tmp/sma/vfio-user/qemu
00:14:43.733   18:35:30 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@116 -- # used_vms=0
00:14:43.733   18:35:30 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@117 -- # vm_kill_all
00:14:43.733   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@476 -- # local vm
00:14:43.733    18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@477 -- # vm_list_all
00:14:43.733    18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@466 -- # vms=()
00:14:43.733    18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@466 -- # local vms
00:14:43.733    18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:14:43.733    18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:14:43.733    18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:14:43.733   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@477 -- # for vm in $(vm_list_all)
00:14:43.733   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@478 -- # vm_kill 1
00:14:43.733   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@442 -- # vm_num_is_valid 1
00:14:43.733   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:43.733   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:43.733   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@443 -- # local vm_dir=/root/vhost_test/vms/1
00:14:43.733   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@445 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:14:43.733   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@446 -- # return 0
00:14:43.733   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@481 -- # rm -rf /root/vhost_test/vms
00:14:43.733   18:35:30 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@119 -- # vm_setup --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disk-type=virtio --force=0 '--qemu-args=-qmp tcp:localhost:10005,server,nowait -device pci-bridge,chassis_nr=1,id=pci.spdk.0 -device pci-bridge,chassis_nr=2,id=pci.spdk.1'
00:14:43.733   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@518 -- # xtrace_disable
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:43.734  INFO: Creating new VM in /root/vhost_test/vms/0
00:14:43.734  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:14:43.734  INFO: TASK MASK: 1-2
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@671 -- # local node_num=0
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@672 -- # local boot_disk_present=false
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@60 -- # local verbose_out
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@61 -- # false
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@62 -- # verbose_out=
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@70 -- # shift
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:14:43.734  INFO: NUMA NODE: 0
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@677 -- # [[ -n '' ]]
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@686 -- # [[ -z '' ]]
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@691 -- # (( 0 == 0 ))
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@691 -- # [[ virtio == virtio* ]]
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@692 -- # disks=("default_virtio.img")
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@701 -- # IFS=,
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@701 -- # read -r disk disk_type _
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@702 -- # [[ -z '' ]]
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@702 -- # disk_type=virtio
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@704 -- # case $disk_type in
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@706 -- # local raw_name=RAWSCSI
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@707 -- # local raw_disk=/root/vhost_test/vms/0/test.img
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@710 -- # [[ -f default_virtio.img ]]
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@714 -- # notice 'Creating Virtio disc /root/vhost_test/vms/0/test.img'
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@94 -- # message INFO 'Creating Virtio disc /root/vhost_test/vms/0/test.img'
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@60 -- # local verbose_out
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@61 -- # false
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@62 -- # verbose_out=
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@70 -- # shift
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@71 -- # echo -e 'INFO: Creating Virtio disc /root/vhost_test/vms/0/test.img'
00:14:43.734  INFO: Creating Virtio disc /root/vhost_test/vms/0/test.img
00:14:43.734   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@715 -- # dd if=/dev/zero of=/root/vhost_test/vms/0/test.img bs=1024k count=1024
00:14:44.303  1024+0 records in
00:14:44.303  1024+0 records out
00:14:44.303  1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.478366 s, 2.2 GB/s
00:14:44.303   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@718 -- # cmd+=(-device "virtio-scsi-pci,num_queues=$queue_number")
00:14:44.303   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@719 -- # cmd+=(-device "scsi-hd,drive=hd$i,vendor=$raw_name")
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@720 -- # cmd+=(-drive "if=none,id=hd$i,file=$raw_disk,format=raw$raw_cache")
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@780 -- # [[ -n '' ]]
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@785 -- # (( 1 ))
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@785 -- # cmd+=("${qemu_args[@]}")
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/0/run.sh'
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/0/run.sh'
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@60 -- # local verbose_out
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@61 -- # false
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@62 -- # verbose_out=
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@70 -- # shift
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/0/run.sh'
00:14:44.304  INFO: Saving to /root/vhost_test/vms/0/run.sh
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@787 -- # cat
00:14:44.304    18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 1-2 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :100 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10002,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/0/qemu.pid -serial file:/root/vhost_test/vms/0/serial.log -D /root/vhost_test/vms/0/qemu.log -chardev file,path=/root/vhost_test/vms/0/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10000-:22,hostfwd=tcp::10001-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device virtio-scsi-pci,num_queues=2 -device scsi-hd,drive=hd,vendor=RAWSCSI -drive if=none,id=hd,file=/root/vhost_test/vms/0/test.img,format=raw '-qmp tcp:localhost:10005,server,nowait -device pci-bridge,chassis_nr=1,id=pci.spdk.0 -device pci-bridge,chassis_nr=2,id=pci.spdk.1'
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/0/run.sh
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@827 -- # echo 10000
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@828 -- # echo 10001
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@829 -- # echo 10002
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/0/migration_port
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@832 -- # [[ -z '' ]]
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@834 -- # echo 10004
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@835 -- # echo 100
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@837 -- # [[ -z '' ]]
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@838 -- # [[ -z '' ]]
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@124 -- # vm_run 0
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@843 -- # local run_all=false
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@844 -- # local vms_to_run=
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@846 -- # getopts a-: optchar
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@856 -- # false
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@859 -- # shift 0
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@860 -- # for vm in "$@"
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@861 -- # vm_num_is_valid 0
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/0/run.sh ]]
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@866 -- # vms_to_run+=' 0'
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@871 -- # vm_is_running 0
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@369 -- # vm_num_is_valid 0
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/0
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@373 -- # return 1
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/0/run.sh'
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/0/run.sh'
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@60 -- # local verbose_out
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@61 -- # false
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@62 -- # verbose_out=
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@70 -- # shift
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/0/run.sh'
00:14:44.304  INFO: running /root/vhost_test/vms/0/run.sh
00:14:44.304   18:35:30 sma.sma_vfiouser_qemu -- vhost/common.sh@877 -- # /root/vhost_test/vms/0/run.sh
00:14:44.304  Running VM in /root/vhost_test/vms/0
00:14:44.564  Waiting for QEMU pid file
00:14:45.501  === qemu.log ===
00:14:45.501  === qemu.log ===
00:14:45.501   18:35:31 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@125 -- # vm_wait_for_boot 300 0
00:14:45.501   18:35:31 sma.sma_vfiouser_qemu -- vhost/common.sh@913 -- # assert_number 300
00:14:45.501   18:35:31 sma.sma_vfiouser_qemu -- vhost/common.sh@281 -- # [[ 300 =~ [0-9]+ ]]
00:14:45.501   18:35:31 sma.sma_vfiouser_qemu -- vhost/common.sh@281 -- # return 0
00:14:45.501   18:35:31 sma.sma_vfiouser_qemu -- vhost/common.sh@915 -- # xtrace_disable
00:14:45.501   18:35:31 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:45.501  INFO: Waiting for VMs to boot
00:14:45.501  INFO: waiting for VM0 (/root/vhost_test/vms/0)
00:15:07.490  
00:15:07.490  INFO: VM0 ready
00:15:07.490  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:07.490  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:07.490  INFO: all VMs ready
00:15:07.490   18:35:53 sma.sma_vfiouser_qemu -- vhost/common.sh@973 -- # return 0
00:15:07.490   18:35:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@128 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc
00:15:07.490   18:35:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@129 -- # tgtpid=476727
00:15:07.490   18:35:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@130 -- # waitforlisten 476727
00:15:07.490   18:35:53 sma.sma_vfiouser_qemu -- common/autotest_common.sh@835 -- # '[' -z 476727 ']'
00:15:07.490   18:35:53 sma.sma_vfiouser_qemu -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:07.490   18:35:53 sma.sma_vfiouser_qemu -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:07.490   18:35:53 sma.sma_vfiouser_qemu -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:07.490  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:07.490   18:35:53 sma.sma_vfiouser_qemu -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:07.490   18:35:53 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:07.490  [2024-11-17 18:35:53.813774] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:15:07.490  [2024-11-17 18:35:53.813915] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid476727 ]
00:15:07.490  EAL: No free 2048 kB hugepages reported on node 1
00:15:07.490  [2024-11-17 18:35:53.945436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:07.490  [2024-11-17 18:35:53.988735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:08.145   18:35:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:08.145   18:35:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@868 -- # return 0
00:15:08.145   18:35:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@133 -- # rpc_cmd dpdk_cryptodev_scan_accel_module
00:15:08.145   18:35:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:08.146   18:35:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:08.146   18:35:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:08.146   18:35:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@134 -- # rpc_cmd dpdk_cryptodev_set_driver -d crypto_aesni_mb
00:15:08.146   18:35:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:08.146   18:35:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:08.146  [2024-11-17 18:35:54.659175] accel_dpdk_cryptodev.c: 224:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb
00:15:08.146   18:35:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:08.146   18:35:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@135 -- # rpc_cmd accel_assign_opc -o encrypt -m dpdk_cryptodev
00:15:08.146   18:35:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:08.146   18:35:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:08.146  [2024-11-17 18:35:54.667236] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:15:08.146   18:35:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:08.146   18:35:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@136 -- # rpc_cmd accel_assign_opc -o decrypt -m dpdk_cryptodev
00:15:08.146   18:35:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:08.146   18:35:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:08.146  [2024-11-17 18:35:54.675233] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:15:08.146   18:35:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:08.146   18:35:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@137 -- # rpc_cmd framework_start_init
00:15:08.146   18:35:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:08.146   18:35:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:08.450  [2024-11-17 18:35:54.753877] accel_dpdk_cryptodev.c:1179:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 1
00:15:08.450   18:35:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:08.450   18:35:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@140 -- # rpc_cmd bdev_null_create null0 100 4096
00:15:08.450   18:35:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:08.450   18:35:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:08.450  null0
00:15:08.450   18:35:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:08.450   18:35:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@141 -- # rpc_cmd bdev_null_create null1 100 4096
00:15:08.450   18:35:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:08.450   18:35:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:08.450  null1
00:15:08.450   18:35:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:08.450   18:35:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@160 -- # smapid=476952
00:15:08.450   18:35:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@163 -- # sma_waitforlisten
00:15:08.450   18:35:54 sma.sma_vfiouser_qemu -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:15:08.450    18:35:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@144 -- # cat
00:15:08.450   18:35:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@144 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:15:08.450   18:35:54 sma.sma_vfiouser_qemu -- sma/common.sh@8 -- # local sma_port=8080
00:15:08.450   18:35:54 sma.sma_vfiouser_qemu -- sma/common.sh@10 -- # (( i = 0 ))
00:15:08.450   18:35:54 sma.sma_vfiouser_qemu -- sma/common.sh@10 -- # (( i < 5 ))
00:15:08.450   18:35:54 sma.sma_vfiouser_qemu -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:08.759   18:35:55 sma.sma_vfiouser_qemu -- sma/common.sh@14 -- # sleep 1s
00:15:08.759  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:08.759  I0000 00:00:1731864955.220516  476952 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:09.694   18:35:56 sma.sma_vfiouser_qemu -- sma/common.sh@10 -- # (( i++ ))
00:15:09.694   18:35:56 sma.sma_vfiouser_qemu -- sma/common.sh@10 -- # (( i < 5 ))
00:15:09.694   18:35:56 sma.sma_vfiouser_qemu -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:09.694   18:35:56 sma.sma_vfiouser_qemu -- sma/common.sh@12 -- # return 0
00:15:09.695   18:35:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@166 -- # rpc_cmd nvmf_get_transports --trtype VFIOUSER
00:15:09.695   18:35:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:09.695   18:35:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:09.695  [
00:15:09.695    {
00:15:09.695      "trtype": "VFIOUSER",
00:15:09.695      "max_queue_depth": 256,
00:15:09.695      "max_io_qpairs_per_ctrlr": 127,
00:15:09.695      "in_capsule_data_size": 0,
00:15:09.695      "max_io_size": 131072,
00:15:09.695      "io_unit_size": 131072,
00:15:09.695      "max_aq_depth": 32,
00:15:09.695      "num_shared_buffers": 0,
00:15:09.695      "buf_cache_size": 0,
00:15:09.695      "dif_insert_or_strip": false,
00:15:09.695      "zcopy": false,
00:15:09.695      "abort_timeout_sec": 0,
00:15:09.695      "ack_timeout": 0,
00:15:09.695      "data_wr_pool_size": 0
00:15:09.695    }
00:15:09.695  ]
00:15:09.695   18:35:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:09.695   18:35:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@169 -- # vm_exec 0 '[[ ! -e /sys/class/nvme-subsystem/nvme-subsys0 ]]'
00:15:09.695   18:35:56 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:09.695   18:35:56 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:09.695   18:35:56 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:09.695   18:35:56 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:09.695   18:35:56 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:09.695    18:35:56 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:09.695    18:35:56 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:09.695    18:35:56 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:09.695    18:35:56 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:09.695    18:35:56 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:09.695    18:35:56 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:09.695   18:35:56 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 '[[ ! -e /sys/class/nvme-subsystem/nvme-subsys0 ]]'
00:15:09.695  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:09.953    18:35:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@172 -- # create_device 0 0
00:15:09.953    18:35:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=0
00:15:09.953    18:35:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@172 -- # jq -r .handle
00:15:09.953    18:35:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:15:09.953    18:35:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:09.953  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:09.953  I0000 00:00:1731864956.501262  477205 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:09.953  I0000 00:00:1731864956.503006  477205 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:09.953  [2024-11-17 18:35:56.509768] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-0' does not exist
00:15:10.212   18:35:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@172 -- # device0=nvme:nqn.2016-06.io.spdk:vfiouser-0
00:15:10.212   18:35:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@173 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:15:10.212   18:35:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:10.212   18:35:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:10.212  [
00:15:10.212    {
00:15:10.212      "nqn": "nqn.2016-06.io.spdk:vfiouser-0",
00:15:10.212      "subtype": "NVMe",
00:15:10.212      "listen_addresses": [
00:15:10.212        {
00:15:10.212          "trtype": "VFIOUSER",
00:15:10.212          "adrfam": "IPv4",
00:15:10.212          "traddr": "/var/tmp/vfiouser-0",
00:15:10.212          "trsvcid": ""
00:15:10.212        }
00:15:10.212      ],
00:15:10.212      "allow_any_host": true,
00:15:10.212      "hosts": [],
00:15:10.212      "serial_number": "00000000000000000000",
00:15:10.212      "model_number": "SPDK bdev Controller",
00:15:10.212      "max_namespaces": 32,
00:15:10.212      "min_cntlid": 1,
00:15:10.213      "max_cntlid": 65519,
00:15:10.213      "namespaces": []
00:15:10.213    }
00:15:10.213  ]
00:15:10.213   18:35:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:10.213   18:35:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@174 -- # vm_check_subsys_nqn 0 nqn.2016-06.io.spdk:vfiouser-0
00:15:10.213   18:35:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@89 -- # sleep 1
00:15:10.471  [2024-11-17 18:35:56.870936] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/tmp/vfiouser-0: enabling controller
00:15:11.408    18:35:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:15:11.408    18:35:57 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:11.408    18:35:57 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:11.408    18:35:57 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:11.408    18:35:57 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:11.408    18:35:57 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:11.408     18:35:57 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:11.408     18:35:57 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:11.408     18:35:57 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:11.408     18:35:57 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:11.408     18:35:57 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:11.408     18:35:57 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:11.408    18:35:57 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:15:11.408  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:11.408   18:35:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # nqn=/sys/class/nvme/nvme0/subsysnqn
00:15:11.408   18:35:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@91 -- # [[ -z /sys/class/nvme/nvme0/subsysnqn ]]
00:15:11.408    18:35:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@177 -- # rpc_cmd nvmf_get_subsystems
00:15:11.408    18:35:57 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:11.408    18:35:57 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:11.408    18:35:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@177 -- # jq -r '. | length'
00:15:11.408    18:35:57 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:11.408   18:35:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@177 -- # [[ 2 -eq 2 ]]
00:15:11.408    18:35:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@179 -- # create_device 1 0
00:15:11.408    18:35:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=1
00:15:11.408    18:35:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:15:11.408    18:35:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@179 -- # jq -r .handle
00:15:11.408    18:35:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:11.668  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:11.668  I0000 00:00:1731864958.170072  477643 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:11.668  I0000 00:00:1731864958.171870  477643 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:11.668  [2024-11-17 18:35:58.178512] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-1' does not exist
00:15:11.927   18:35:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@179 -- # device1=nvme:nqn.2016-06.io.spdk:vfiouser-1
00:15:11.927   18:35:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@180 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:15:11.927   18:35:58 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:11.927   18:35:58 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:11.927  [
00:15:11.927    {
00:15:11.927      "nqn": "nqn.2016-06.io.spdk:vfiouser-0",
00:15:11.927      "subtype": "NVMe",
00:15:11.927      "listen_addresses": [
00:15:11.927        {
00:15:11.927          "trtype": "VFIOUSER",
00:15:11.927          "adrfam": "IPv4",
00:15:11.927          "traddr": "/var/tmp/vfiouser-0",
00:15:11.927          "trsvcid": ""
00:15:11.927        }
00:15:11.927      ],
00:15:11.927      "allow_any_host": true,
00:15:11.927      "hosts": [],
00:15:11.927      "serial_number": "00000000000000000000",
00:15:11.927      "model_number": "SPDK bdev Controller",
00:15:11.927      "max_namespaces": 32,
00:15:11.927      "min_cntlid": 1,
00:15:11.927      "max_cntlid": 65519,
00:15:11.927      "namespaces": []
00:15:11.927    }
00:15:11.927  ]
00:15:11.927   18:35:58 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:11.927   18:35:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@181 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:15:11.927   18:35:58 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:11.927   18:35:58 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:11.927  [
00:15:11.927    {
00:15:11.927      "nqn": "nqn.2016-06.io.spdk:vfiouser-1",
00:15:11.927      "subtype": "NVMe",
00:15:11.927      "listen_addresses": [
00:15:11.927        {
00:15:11.927          "trtype": "VFIOUSER",
00:15:11.927          "adrfam": "IPv4",
00:15:11.927          "traddr": "/var/tmp/vfiouser-1",
00:15:11.927          "trsvcid": ""
00:15:11.927        }
00:15:11.927      ],
00:15:11.927      "allow_any_host": true,
00:15:11.927      "hosts": [],
00:15:11.927      "serial_number": "00000000000000000000",
00:15:11.927      "model_number": "SPDK bdev Controller",
00:15:11.927      "max_namespaces": 32,
00:15:11.927      "min_cntlid": 1,
00:15:11.927      "max_cntlid": 65519,
00:15:11.927      "namespaces": []
00:15:11.927    }
00:15:11.927  ]
00:15:11.927   18:35:58 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:11.927   18:35:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@182 -- # [[ nvme:nqn.2016-06.io.spdk:vfiouser-0 != \n\v\m\e\:\n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\v\f\i\o\u\s\e\r\-\1 ]]
00:15:11.927   18:35:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@183 -- # vm_check_subsys_nqn 0 nqn.2016-06.io.spdk:vfiouser-1
00:15:11.927   18:35:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@89 -- # sleep 1
00:15:11.927  [2024-11-17 18:35:58.488468] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/tmp/vfiouser-1: enabling controller
00:15:12.861    18:35:59 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:15:12.861    18:35:59 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:12.861    18:35:59 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:12.861    18:35:59 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:12.861    18:35:59 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:12.861    18:35:59 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:12.861     18:35:59 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:12.861     18:35:59 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:12.861     18:35:59 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:12.861     18:35:59 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:12.861     18:35:59 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:12.861     18:35:59 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:12.861    18:35:59 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:15:12.861  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:13.119   18:35:59 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # nqn=/sys/class/nvme/nvme1/subsysnqn
00:15:13.119   18:35:59 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@91 -- # [[ -z /sys/class/nvme/nvme1/subsysnqn ]]
00:15:13.119    18:35:59 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@186 -- # rpc_cmd nvmf_get_subsystems
00:15:13.119    18:35:59 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@186 -- # jq -r '. | length'
00:15:13.119    18:35:59 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:13.119    18:35:59 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:13.119    18:35:59 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:13.119   18:35:59 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@186 -- # [[ 3 -eq 3 ]]
00:15:13.119    18:35:59 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@190 -- # create_device 0 0
00:15:13.119    18:35:59 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=0
00:15:13.119    18:35:59 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@190 -- # jq -r .handle
00:15:13.119    18:35:59 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:15:13.119    18:35:59 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:13.377  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:13.377  I0000 00:00:1731864959.850126  477884 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:13.377  I0000 00:00:1731864959.852052  477884 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:13.377   18:35:59 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@190 -- # tmp0=nvme:nqn.2016-06.io.spdk:vfiouser-0
00:15:13.377    18:35:59 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@191 -- # create_device 1 0
00:15:13.377    18:35:59 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=1
00:15:13.377    18:35:59 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@191 -- # jq -r .handle
00:15:13.377    18:35:59 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:15:13.377    18:35:59 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:13.636  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:13.636  I0000 00:00:1731864960.138227  477935 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:13.636  I0000 00:00:1731864960.140090  477935 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:13.636   18:36:00 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@191 -- # tmp1=nvme:nqn.2016-06.io.spdk:vfiouser-1
00:15:13.636    18:36:00 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@193 -- # vm_count_nvme 0
00:15:13.636    18:36:00 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@68 -- # vm_exec 0 'grep -sl SPDK /sys/class/nvme/*/model || true'
00:15:13.636    18:36:00 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:13.636    18:36:00 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:13.636    18:36:00 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:13.636    18:36:00 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:13.636    18:36:00 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@68 -- # wc -l
00:15:13.636    18:36:00 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:13.636     18:36:00 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:13.636     18:36:00 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:13.636     18:36:00 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:13.636     18:36:00 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:13.636     18:36:00 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:13.636     18:36:00 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:13.636    18:36:00 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -sl SPDK /sys/class/nvme/*/model || true'
00:15:13.894  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:13.894   18:36:00 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@193 -- # [[ 2 -eq 2 ]]
00:15:13.894    18:36:00 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@195 -- # jq -r '. | length'
00:15:13.894    18:36:00 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@195 -- # rpc_cmd nvmf_get_subsystems
00:15:13.894    18:36:00 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:13.894    18:36:00 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:13.894    18:36:00 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:13.894   18:36:00 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@195 -- # [[ 3 -eq 3 ]]
00:15:13.894   18:36:00 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@196 -- # [[ nvme:nqn.2016-06.io.spdk:vfiouser-0 == \n\v\m\e\:\n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\v\f\i\o\u\s\e\r\-\0 ]]
00:15:13.894   18:36:00 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@197 -- # [[ nvme:nqn.2016-06.io.spdk:vfiouser-1 == \n\v\m\e\:\n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\v\f\i\o\u\s\e\r\-\1 ]]
00:15:13.894   18:36:00 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@200 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-0
00:15:13.894   18:36:00 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:14.153  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:14.153  I0000 00:00:1731864960.679035  478179 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:14.153  I0000 00:00:1731864960.681013  478179 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:14.153  {}
00:15:14.153   18:36:00 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@201 -- # NOT rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:15:14.153   18:36:00 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:15:14.153   18:36:00 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:15:14.411   18:36:00 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:15:14.411   18:36:00 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:14.411    18:36:00 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:15:14.411   18:36:00 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:14.411   18:36:00 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:15:14.411   18:36:00 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:14.411   18:36:00 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:14.411  [2024-11-17 18:36:00.734537] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-0' does not exist
00:15:14.411  request:
00:15:14.411  {
00:15:14.411    "nqn": "nqn.2016-06.io.spdk:vfiouser-0",
00:15:14.411    "method": "nvmf_get_subsystems",
00:15:14.411    "req_id": 1
00:15:14.411  }
00:15:14.411  Got JSON-RPC error response
00:15:14.411  response:
00:15:14.411  {
00:15:14.411    "code": -19,
00:15:14.411    "message": "No such device"
00:15:14.411  }
00:15:14.411   18:36:00 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:15:14.411   18:36:00 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:15:14.411   18:36:00 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:14.411   18:36:00 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:14.412   18:36:00 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:14.412   18:36:00 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@202 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:15:14.412   18:36:00 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:14.412   18:36:00 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:14.412  [
00:15:14.412    {
00:15:14.412      "nqn": "nqn.2016-06.io.spdk:vfiouser-1",
00:15:14.412      "subtype": "NVMe",
00:15:14.412      "listen_addresses": [
00:15:14.412        {
00:15:14.412          "trtype": "VFIOUSER",
00:15:14.412          "adrfam": "IPv4",
00:15:14.412          "traddr": "/var/tmp/vfiouser-1",
00:15:14.412          "trsvcid": ""
00:15:14.412        }
00:15:14.412      ],
00:15:14.412      "allow_any_host": true,
00:15:14.412      "hosts": [],
00:15:14.412      "serial_number": "00000000000000000000",
00:15:14.412      "model_number": "SPDK bdev Controller",
00:15:14.412      "max_namespaces": 32,
00:15:14.412      "min_cntlid": 1,
00:15:14.412      "max_cntlid": 65519,
00:15:14.412      "namespaces": []
00:15:14.412    }
00:15:14.412  ]
00:15:14.412   18:36:00 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:14.412    18:36:00 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@203 -- # rpc_cmd nvmf_get_subsystems
00:15:14.412    18:36:00 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@203 -- # jq -r '. | length'
00:15:14.412    18:36:00 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:14.412    18:36:00 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:14.412    18:36:00 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:14.412   18:36:00 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@203 -- # [[ 2 -eq 2 ]]
00:15:14.412    18:36:00 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@204 -- # vm_count_nvme 0
00:15:14.412    18:36:00 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@68 -- # vm_exec 0 'grep -sl SPDK /sys/class/nvme/*/model || true'
00:15:14.412    18:36:00 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:14.412    18:36:00 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:14.412    18:36:00 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@68 -- # wc -l
00:15:14.412    18:36:00 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:14.412    18:36:00 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:14.412    18:36:00 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:14.412     18:36:00 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:14.412     18:36:00 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:14.412     18:36:00 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:14.412     18:36:00 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:14.412     18:36:00 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:14.412     18:36:00 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:14.412    18:36:00 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -sl SPDK /sys/class/nvme/*/model || true'
00:15:14.412  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:14.670   18:36:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@204 -- # [[ 1 -eq 1 ]]
00:15:14.670   18:36:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@206 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-1
00:15:14.670   18:36:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:14.670  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:14.670  I0000 00:00:1731864961.241966  478260 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:14.670  I0000 00:00:1731864961.243950  478260 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:14.929  {}
00:15:14.929   18:36:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@207 -- # NOT rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:15:14.929   18:36:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:15:14.929   18:36:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:15:14.929   18:36:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:15:14.929   18:36:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:14.929    18:36:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:15:14.929   18:36:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:14.929   18:36:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:15:14.929   18:36:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:14.929   18:36:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:14.929  [2024-11-17 18:36:01.292270] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-0' does not exist
00:15:14.929  request:
00:15:14.929  {
00:15:14.929  "nqn": "nqn.2016-06.io.spdk:vfiouser-0",
00:15:14.929  "method": "nvmf_get_subsystems",
00:15:14.929  "req_id": 1
00:15:14.929  }
00:15:14.929  Got JSON-RPC error response
00:15:14.929  response:
00:15:14.929  {
00:15:14.929  "code": -19,
00:15:14.929  "message": "No such device"
00:15:14.929  }
00:15:14.929   18:36:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:15:14.929   18:36:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:15:14.929   18:36:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:14.929   18:36:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:14.929   18:36:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:14.929   18:36:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@208 -- # NOT rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:15:14.929   18:36:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:15:14.929   18:36:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:15:14.929   18:36:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:15:14.929   18:36:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:14.929    18:36:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:15:14.929   18:36:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:14.929   18:36:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:15:14.929   18:36:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:14.929   18:36:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:14.929  [2024-11-17 18:36:01.308323] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-1' does not exist
00:15:14.929  request:
00:15:14.929  {
00:15:14.929  "nqn": "nqn.2016-06.io.spdk:vfiouser-1",
00:15:14.929  "method": "nvmf_get_subsystems",
00:15:14.929  "req_id": 1
00:15:14.929  }
00:15:14.929  Got JSON-RPC error response
00:15:14.929  response:
00:15:14.929  {
00:15:14.929  "code": -19,
00:15:14.929  "message": "No such device"
00:15:14.929  }
00:15:14.929   18:36:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:15:14.929   18:36:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:15:14.929   18:36:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:14.929   18:36:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:14.929   18:36:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:14.929    18:36:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@209 -- # jq -r '. | length'
00:15:14.929    18:36:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@209 -- # rpc_cmd nvmf_get_subsystems
00:15:14.929    18:36:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:14.929    18:36:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:14.929    18:36:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:14.929   18:36:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@209 -- # [[ 1 -eq 1 ]]
00:15:14.929    18:36:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@210 -- # vm_count_nvme 0
00:15:14.929    18:36:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@68 -- # vm_exec 0 'grep -sl SPDK /sys/class/nvme/*/model || true'
00:15:14.929    18:36:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@68 -- # wc -l
00:15:14.929    18:36:01 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:14.929    18:36:01 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:14.929    18:36:01 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:14.929    18:36:01 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:14.929    18:36:01 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:14.929     18:36:01 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:14.929     18:36:01 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:14.929     18:36:01 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:14.929     18:36:01 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:14.929     18:36:01 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:14.929     18:36:01 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:14.929    18:36:01 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -sl SPDK /sys/class/nvme/*/model || true'
00:15:14.929  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:15.189   18:36:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@210 -- # [[ 0 -eq 0 ]]
00:15:15.189   18:36:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@213 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-0
00:15:15.189   18:36:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:15.447  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:15.447  I0000 00:00:1731864961.804638  478517 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:15.447  I0000 00:00:1731864961.806302  478517 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:15.447  [2024-11-17 18:36:01.809735] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-0' does not exist
00:15:15.447  {}
00:15:15.447   18:36:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@214 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-1
00:15:15.447   18:36:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:15.707  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:15.707  I0000 00:00:1731864962.041203  478554 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:15.707  I0000 00:00:1731864962.043007  478554 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:15.707  [2024-11-17 18:36:02.046422] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-1' does not exist
00:15:15.707  {}
00:15:15.707    18:36:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@217 -- # create_device 0 0
00:15:15.707    18:36:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=0
00:15:15.707    18:36:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:15:15.707    18:36:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:15.707    18:36:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@217 -- # jq -r .handle
00:15:15.966  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:15.966  I0000 00:00:1731864962.287296  478579 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:15.966  I0000 00:00:1731864962.289024  478579 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:15.966  [2024-11-17 18:36:02.295155] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-0' does not exist
00:15:15.966   18:36:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@217 -- # device0=nvme:nqn.2016-06.io.spdk:vfiouser-0
00:15:15.966    18:36:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@218 -- # create_device 1 0
00:15:15.966    18:36:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@218 -- # jq -r .handle
00:15:15.966    18:36:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=1
00:15:15.966    18:36:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:15:15.966    18:36:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:16.225  [2024-11-17 18:36:02.598967] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/tmp/vfiouser-0: enabling controller
00:15:16.225  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:16.225  I0000 00:00:1731864962.661439  478797 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:16.225  I0000 00:00:1731864962.663054  478797 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:16.225  [2024-11-17 18:36:02.668253] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-1' does not exist
00:15:16.484   18:36:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@218 -- # device1=nvme:nqn.2016-06.io.spdk:vfiouser-1
00:15:16.484    18:36:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@219 -- # jq -r '.[].uuid'
00:15:16.484    18:36:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@219 -- # rpc_cmd bdev_get_bdevs -b null0
00:15:16.484    18:36:02 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:16.484    18:36:02 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:16.484    18:36:02 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:16.484   18:36:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@219 -- # uuid0=3b7e050b-3932-4521-baca-34878047379c
00:15:16.484    18:36:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@220 -- # rpc_cmd bdev_get_bdevs -b null1
00:15:16.484    18:36:02 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:16.484    18:36:02 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:16.484    18:36:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@220 -- # jq -r '.[].uuid'
00:15:16.484    18:36:02 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:16.484   18:36:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@220 -- # uuid1=333e1ddb-657d-4ad0-b146-27223d26a7fe
00:15:16.484   18:36:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@223 -- # attach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 3b7e050b-3932-4521-baca-34878047379c
00:15:16.484   18:36:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:16.484    18:36:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # uuid2base64 3b7e050b-3932-4521-baca-34878047379c
00:15:16.484    18:36:02 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:15:16.484  [2024-11-17 18:36:02.971126] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/tmp/vfiouser-1: enabling controller
00:15:16.743  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:16.743  I0000 00:00:1731864963.155367  478836 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:16.743  I0000 00:00:1731864963.157205  478836 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:16.743  {}
00:15:16.743    18:36:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@224 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:15:16.743    18:36:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@224 -- # jq -r '.[0].namespaces | length'
00:15:16.743    18:36:03 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:16.743    18:36:03 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:16.743    18:36:03 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:16.743   18:36:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@224 -- # [[ 1 -eq 1 ]]
00:15:16.743    18:36:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@225 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:15:16.743    18:36:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@225 -- # jq -r '.[0].namespaces | length'
00:15:16.743    18:36:03 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:16.743    18:36:03 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:16.743    18:36:03 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:16.743   18:36:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@225 -- # [[ 0 -eq 0 ]]
00:15:16.743    18:36:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@226 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:15:16.743    18:36:03 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:16.743    18:36:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@226 -- # jq -r '.[0].namespaces[0].uuid'
00:15:16.743    18:36:03 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:16.743    18:36:03 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:17.003   18:36:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@226 -- # [[ 3b7e050b-3932-4521-baca-34878047379c == \3\b\7\e\0\5\0\b\-\3\9\3\2\-\4\5\2\1\-\b\a\c\a\-\3\4\8\7\8\0\4\7\3\7\9\c ]]
00:15:17.003   18:36:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@227 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 3b7e050b-3932-4521-baca-34878047379c
00:15:17.003   18:36:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:15:17.003   18:36:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-0
00:15:17.003   18:36:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=3b7e050b-3932-4521-baca-34878047379c
00:15:17.003    18:36:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:15:17.003    18:36:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:15:17.003    18:36:03 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:17.003    18:36:03 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:17.003    18:36:03 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:17.003    18:36:03 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:17.003    18:36:03 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:17.003     18:36:03 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:17.003     18:36:03 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:17.003     18:36:03 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:17.003     18:36:03 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:17.003     18:36:03 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:17.003     18:36:03 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:17.003    18:36:03 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:15:17.003  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:17.003   18:36:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme0
00:15:17.003   18:36:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme0 ]]
00:15:17.003    18:36:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 3b7e050b-3932-4521-baca-34878047379c /sys/class/nvme/nvme0/nvme*/uuid'
00:15:17.003    18:36:03 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:17.003    18:36:03 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:17.003    18:36:03 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:17.003    18:36:03 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:17.003    18:36:03 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:17.003     18:36:03 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:17.003     18:36:03 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:17.003     18:36:03 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:17.003     18:36:03 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:17.003     18:36:03 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:17.003     18:36:03 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:17.003    18:36:03 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 3b7e050b-3932-4521-baca-34878047379c /sys/class/nvme/nvme0/nvme*/uuid'
00:15:17.262  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:17.262   18:36:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=/sys/class/nvme/nvme0/nvme0c0n1/uuid
00:15:17.262   18:36:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z /sys/class/nvme/nvme0/nvme0c0n1/uuid ]]
00:15:17.262   18:36:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@229 -- # attach_volume nvme:nqn.2016-06.io.spdk:vfiouser-1 333e1ddb-657d-4ad0-b146-27223d26a7fe
00:15:17.262   18:36:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:17.262    18:36:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # uuid2base64 333e1ddb-657d-4ad0-b146-27223d26a7fe
00:15:17.262    18:36:03 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:15:17.830  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:17.830  I0000 00:00:1731864964.103405  479074 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:17.830  I0000 00:00:1731864964.105294  479074 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:17.830  {}
00:15:17.830    18:36:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@230 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:15:17.830    18:36:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:17.830    18:36:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:17.830    18:36:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@230 -- # jq -r '.[0].namespaces | length'
00:15:17.830    18:36:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:17.830   18:36:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@230 -- # [[ 1 -eq 1 ]]
00:15:17.830    18:36:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@231 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:15:17.830    18:36:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@231 -- # jq -r '.[0].namespaces | length'
00:15:17.830    18:36:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:17.830    18:36:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:17.830    18:36:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:17.830   18:36:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@231 -- # [[ 1 -eq 1 ]]
00:15:17.830    18:36:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@232 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:15:17.830    18:36:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:17.830    18:36:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@232 -- # jq -r '.[0].namespaces[0].uuid'
00:15:17.830    18:36:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:17.830    18:36:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:17.830   18:36:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@232 -- # [[ 3b7e050b-3932-4521-baca-34878047379c == \3\b\7\e\0\5\0\b\-\3\9\3\2\-\4\5\2\1\-\b\a\c\a\-\3\4\8\7\8\0\4\7\3\7\9\c ]]
00:15:17.830    18:36:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@233 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:15:17.830    18:36:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@233 -- # jq -r '.[0].namespaces[0].uuid'
00:15:17.830    18:36:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:17.830    18:36:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:17.830    18:36:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:17.830   18:36:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@233 -- # [[ 333e1ddb-657d-4ad0-b146-27223d26a7fe == \3\3\3\e\1\d\d\b\-\6\5\7\d\-\4\a\d\0\-\b\1\4\6\-\2\7\2\2\3\d\2\6\a\7\f\e ]]
00:15:17.830   18:36:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@234 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 333e1ddb-657d-4ad0-b146-27223d26a7fe
00:15:17.830   18:36:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:15:17.830   18:36:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-1
00:15:17.830   18:36:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=333e1ddb-657d-4ad0-b146-27223d26a7fe
00:15:17.830    18:36:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:15:17.830    18:36:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:15:17.830    18:36:04 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:17.830    18:36:04 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:17.830    18:36:04 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:17.830    18:36:04 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:17.830    18:36:04 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:17.830     18:36:04 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:17.830     18:36:04 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:17.830     18:36:04 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:17.830     18:36:04 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:17.830     18:36:04 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:17.830     18:36:04 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:17.830    18:36:04 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:15:17.830  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:18.089   18:36:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme1
00:15:18.089   18:36:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme1 ]]
00:15:18.089    18:36:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 333e1ddb-657d-4ad0-b146-27223d26a7fe /sys/class/nvme/nvme1/nvme*/uuid'
00:15:18.089    18:36:04 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:18.089    18:36:04 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:18.089    18:36:04 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:18.089    18:36:04 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:18.089    18:36:04 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:18.089     18:36:04 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:18.089     18:36:04 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:18.089     18:36:04 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:18.089     18:36:04 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:18.089     18:36:04 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:18.089     18:36:04 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:18.089    18:36:04 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 333e1ddb-657d-4ad0-b146-27223d26a7fe /sys/class/nvme/nvme1/nvme*/uuid'
00:15:18.089  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:18.349   18:36:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=/sys/class/nvme/nvme1/nvme1c1n1/uuid
00:15:18.349   18:36:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z /sys/class/nvme/nvme1/nvme1c1n1/uuid ]]
00:15:18.349   18:36:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@237 -- # attach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 3b7e050b-3932-4521-baca-34878047379c
00:15:18.349   18:36:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:18.349    18:36:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # uuid2base64 3b7e050b-3932-4521-baca-34878047379c
00:15:18.349    18:36:04 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:15:18.607  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:18.607  I0000 00:00:1731864965.095318  479277 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:18.607  I0000 00:00:1731864965.096938  479277 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:18.607  {}
00:15:18.607   18:36:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@238 -- # attach_volume nvme:nqn.2016-06.io.spdk:vfiouser-1 333e1ddb-657d-4ad0-b146-27223d26a7fe
00:15:18.607   18:36:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:18.607    18:36:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # uuid2base64 333e1ddb-657d-4ad0-b146-27223d26a7fe
00:15:18.607    18:36:05 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:15:19.176  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:19.176  I0000 00:00:1731864965.460745  479359 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:19.176  I0000 00:00:1731864965.462914  479359 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:19.176  {}
00:15:19.176    18:36:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@239 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:15:19.176    18:36:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:19.176    18:36:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:19.176    18:36:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@239 -- # jq -r '.[0].namespaces | length'
00:15:19.176    18:36:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:19.176   18:36:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@239 -- # [[ 1 -eq 1 ]]
00:15:19.176    18:36:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@240 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:15:19.176    18:36:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@240 -- # jq -r '.[0].namespaces | length'
00:15:19.176    18:36:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:19.176    18:36:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:19.176    18:36:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:19.176   18:36:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@240 -- # [[ 1 -eq 1 ]]
00:15:19.176    18:36:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@241 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:15:19.176    18:36:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:19.176    18:36:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:19.176    18:36:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@241 -- # jq -r '.[0].namespaces[0].uuid'
00:15:19.176    18:36:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:19.176   18:36:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@241 -- # [[ 3b7e050b-3932-4521-baca-34878047379c == \3\b\7\e\0\5\0\b\-\3\9\3\2\-\4\5\2\1\-\b\a\c\a\-\3\4\8\7\8\0\4\7\3\7\9\c ]]
00:15:19.176    18:36:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@242 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:15:19.176    18:36:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:19.176    18:36:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@242 -- # jq -r '.[0].namespaces[0].uuid'
00:15:19.176    18:36:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:19.176    18:36:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:19.176   18:36:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@242 -- # [[ 333e1ddb-657d-4ad0-b146-27223d26a7fe == \3\3\3\e\1\d\d\b\-\6\5\7\d\-\4\a\d\0\-\b\1\4\6\-\2\7\2\2\3\d\2\6\a\7\f\e ]]
00:15:19.176   18:36:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@243 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 3b7e050b-3932-4521-baca-34878047379c
00:15:19.176   18:36:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:15:19.176   18:36:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-0
00:15:19.176   18:36:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=3b7e050b-3932-4521-baca-34878047379c
00:15:19.176    18:36:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:15:19.176    18:36:05 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:19.176    18:36:05 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:19.176    18:36:05 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:19.176    18:36:05 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:19.176    18:36:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:15:19.176    18:36:05 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:19.176     18:36:05 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:19.176     18:36:05 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:19.176     18:36:05 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:19.176     18:36:05 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:19.176     18:36:05 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:19.176     18:36:05 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:19.176    18:36:05 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:15:19.176  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:19.435   18:36:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme0
00:15:19.435   18:36:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme0 ]]
00:15:19.435    18:36:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 3b7e050b-3932-4521-baca-34878047379c /sys/class/nvme/nvme0/nvme*/uuid'
00:15:19.435    18:36:05 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:19.435    18:36:05 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:19.435    18:36:05 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:19.435    18:36:05 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:19.435    18:36:05 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:19.435     18:36:05 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:19.435     18:36:05 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:19.435     18:36:05 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:19.435     18:36:05 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:19.435     18:36:05 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:19.435     18:36:05 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:19.435    18:36:05 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 3b7e050b-3932-4521-baca-34878047379c /sys/class/nvme/nvme0/nvme*/uuid'
00:15:19.435  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:19.694   18:36:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=/sys/class/nvme/nvme0/nvme0c0n1/uuid
00:15:19.694   18:36:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z /sys/class/nvme/nvme0/nvme0c0n1/uuid ]]
00:15:19.694   18:36:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@244 -- # NOT vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 333e1ddb-657d-4ad0-b146-27223d26a7fe
00:15:19.694   18:36:06 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:15:19.694   18:36:06 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 333e1ddb-657d-4ad0-b146-27223d26a7fe
00:15:19.694   18:36:06 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=vm_check_subsys_volume
00:15:19.694   18:36:06 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:19.694    18:36:06 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t vm_check_subsys_volume
00:15:19.694   18:36:06 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:19.694   18:36:06 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 333e1ddb-657d-4ad0-b146-27223d26a7fe
00:15:19.694   18:36:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:15:19.694   18:36:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-0
00:15:19.694   18:36:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=333e1ddb-657d-4ad0-b146-27223d26a7fe
00:15:19.694    18:36:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:15:19.694    18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:19.694    18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:19.694    18:36:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:15:19.694    18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:19.694    18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:19.694    18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:19.694     18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:19.694     18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:19.694     18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:19.694     18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:19.694     18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:19.694     18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:19.694    18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:15:19.694  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:19.953   18:36:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme0
00:15:19.953   18:36:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme0 ]]
00:15:19.953    18:36:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 333e1ddb-657d-4ad0-b146-27223d26a7fe /sys/class/nvme/nvme0/nvme*/uuid'
00:15:19.953    18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:19.953    18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:19.953    18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:19.953    18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:19.953    18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:19.953     18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:19.953     18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:19.953     18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:19.953     18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:19.953     18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:19.953     18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:19.953    18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 333e1ddb-657d-4ad0-b146-27223d26a7fe /sys/class/nvme/nvme0/nvme*/uuid'
00:15:19.953  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:20.212   18:36:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=
00:15:20.212   18:36:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z '' ]]
00:15:20.212   18:36:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@84 -- # return 1
00:15:20.212   18:36:06 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:15:20.212   18:36:06 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:20.212   18:36:06 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:20.212   18:36:06 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:20.212   18:36:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@245 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 333e1ddb-657d-4ad0-b146-27223d26a7fe
00:15:20.212   18:36:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:15:20.212   18:36:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-1
00:15:20.212   18:36:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=333e1ddb-657d-4ad0-b146-27223d26a7fe
00:15:20.212    18:36:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:15:20.212    18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:20.212    18:36:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:15:20.212    18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:20.212    18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:20.212    18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:20.212    18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:20.212     18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:20.212     18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:20.212     18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:20.212     18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:20.212     18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:20.212     18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:20.212    18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:15:20.212  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:20.471   18:36:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme1
00:15:20.471   18:36:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme1 ]]
00:15:20.471    18:36:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 333e1ddb-657d-4ad0-b146-27223d26a7fe /sys/class/nvme/nvme1/nvme*/uuid'
00:15:20.471    18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:20.471    18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:20.471    18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:20.471    18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:20.471    18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:20.471     18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:20.471     18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:20.471     18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:20.471     18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:20.471     18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:20.471     18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:20.471    18:36:06 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 333e1ddb-657d-4ad0-b146-27223d26a7fe /sys/class/nvme/nvme1/nvme*/uuid'
00:15:20.471  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:20.471   18:36:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=/sys/class/nvme/nvme1/nvme1c1n1/uuid
00:15:20.471   18:36:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z /sys/class/nvme/nvme1/nvme1c1n1/uuid ]]
00:15:20.471   18:36:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@246 -- # NOT vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 3b7e050b-3932-4521-baca-34878047379c
00:15:20.471   18:36:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:15:20.471   18:36:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 3b7e050b-3932-4521-baca-34878047379c
00:15:20.471   18:36:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=vm_check_subsys_volume
00:15:20.471   18:36:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:20.471    18:36:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t vm_check_subsys_volume
00:15:20.471   18:36:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:20.471   18:36:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 3b7e050b-3932-4521-baca-34878047379c
00:15:20.471   18:36:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:15:20.471   18:36:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-1
00:15:20.471   18:36:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=3b7e050b-3932-4521-baca-34878047379c
00:15:20.471    18:36:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:15:20.471    18:36:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:15:20.471    18:36:07 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:20.471    18:36:07 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:20.471    18:36:07 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:20.471    18:36:07 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:20.471    18:36:07 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:20.471     18:36:07 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:20.471     18:36:07 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:20.471     18:36:07 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:20.471     18:36:07 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:20.471     18:36:07 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:20.471     18:36:07 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:20.471    18:36:07 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:15:20.730  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:20.730   18:36:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme1
00:15:20.730   18:36:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme1 ]]
00:15:20.730    18:36:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 3b7e050b-3932-4521-baca-34878047379c /sys/class/nvme/nvme1/nvme*/uuid'
00:15:20.730    18:36:07 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:20.730    18:36:07 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:20.730    18:36:07 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:20.730    18:36:07 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:20.730    18:36:07 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:20.730     18:36:07 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:20.730     18:36:07 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:20.730     18:36:07 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:20.730     18:36:07 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:20.730     18:36:07 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:20.730     18:36:07 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:20.730    18:36:07 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 3b7e050b-3932-4521-baca-34878047379c /sys/class/nvme/nvme1/nvme*/uuid'
00:15:20.730  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:20.989   18:36:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=
00:15:20.989   18:36:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z '' ]]
00:15:20.989   18:36:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@84 -- # return 1
00:15:20.989   18:36:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:15:20.989   18:36:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:20.989   18:36:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:20.989   18:36:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:20.989   18:36:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@249 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 333e1ddb-657d-4ad0-b146-27223d26a7fe
00:15:20.989   18:36:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:20.989    18:36:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 333e1ddb-657d-4ad0-b146-27223d26a7fe
00:15:20.989    18:36:07 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:15:21.247  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:21.247  I0000 00:00:1731864967.762311  480234 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:21.247  I0000 00:00:1731864967.764007  480234 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:21.247  {}
00:15:21.506   18:36:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@250 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-1 3b7e050b-3932-4521-baca-34878047379c
00:15:21.506   18:36:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:21.506    18:36:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 3b7e050b-3932-4521-baca-34878047379c
00:15:21.506    18:36:07 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:15:21.766  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:21.766  I0000 00:00:1731864968.102578  480261 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:21.766  I0000 00:00:1731864968.104451  480261 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:21.766  {}
00:15:21.766    18:36:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@251 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:15:21.766    18:36:08 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:21.766    18:36:08 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:21.766    18:36:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@251 -- # jq -r '.[0].namespaces | length'
00:15:21.766    18:36:08 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:21.766   18:36:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@251 -- # [[ 1 -eq 1 ]]
00:15:21.766    18:36:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@252 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:15:21.766    18:36:08 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:21.766    18:36:08 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:21.766    18:36:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@252 -- # jq -r '.[0].namespaces | length'
00:15:21.766    18:36:08 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:21.766   18:36:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@252 -- # [[ 1 -eq 1 ]]
00:15:21.766    18:36:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@253 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:15:21.766    18:36:08 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:21.766    18:36:08 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:21.766    18:36:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@253 -- # jq -r '.[0].namespaces[0].uuid'
00:15:21.766    18:36:08 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:21.766   18:36:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@253 -- # [[ 3b7e050b-3932-4521-baca-34878047379c == \3\b\7\e\0\5\0\b\-\3\9\3\2\-\4\5\2\1\-\b\a\c\a\-\3\4\8\7\8\0\4\7\3\7\9\c ]]
00:15:21.766    18:36:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@254 -- # jq -r '.[0].namespaces[0].uuid'
00:15:21.766    18:36:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@254 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:15:21.766    18:36:08 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:21.766    18:36:08 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:21.766    18:36:08 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:21.766   18:36:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@254 -- # [[ 333e1ddb-657d-4ad0-b146-27223d26a7fe == \3\3\3\e\1\d\d\b\-\6\5\7\d\-\4\a\d\0\-\b\1\4\6\-\2\7\2\2\3\d\2\6\a\7\f\e ]]
00:15:21.766   18:36:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@255 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 3b7e050b-3932-4521-baca-34878047379c
00:15:21.766   18:36:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:15:21.766   18:36:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-0
00:15:21.766   18:36:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=3b7e050b-3932-4521-baca-34878047379c
00:15:21.766    18:36:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:15:21.766    18:36:08 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:21.766    18:36:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:15:21.766    18:36:08 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:21.766    18:36:08 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:21.766    18:36:08 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:21.766    18:36:08 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:22.025     18:36:08 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:22.025     18:36:08 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:22.025     18:36:08 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:22.025     18:36:08 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:22.025     18:36:08 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:22.025     18:36:08 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:22.025    18:36:08 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:15:22.025  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:22.025   18:36:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme0
00:15:22.025   18:36:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme0 ]]
00:15:22.025    18:36:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 3b7e050b-3932-4521-baca-34878047379c /sys/class/nvme/nvme0/nvme*/uuid'
00:15:22.025    18:36:08 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:22.025    18:36:08 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:22.025    18:36:08 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:22.025    18:36:08 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:22.025    18:36:08 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:22.025     18:36:08 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:22.025     18:36:08 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:22.025     18:36:08 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:22.025     18:36:08 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:22.025     18:36:08 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:22.025     18:36:08 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:22.025    18:36:08 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 3b7e050b-3932-4521-baca-34878047379c /sys/class/nvme/nvme0/nvme*/uuid'
00:15:22.284  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:22.284   18:36:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=/sys/class/nvme/nvme0/nvme0c0n1/uuid
00:15:22.284   18:36:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z /sys/class/nvme/nvme0/nvme0c0n1/uuid ]]
00:15:22.284   18:36:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@256 -- # NOT vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 333e1ddb-657d-4ad0-b146-27223d26a7fe
00:15:22.284   18:36:08 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:15:22.284   18:36:08 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 333e1ddb-657d-4ad0-b146-27223d26a7fe
00:15:22.284   18:36:08 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=vm_check_subsys_volume
00:15:22.284   18:36:08 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:22.284    18:36:08 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t vm_check_subsys_volume
00:15:22.284   18:36:08 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:22.284   18:36:08 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 333e1ddb-657d-4ad0-b146-27223d26a7fe
00:15:22.284   18:36:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:15:22.284   18:36:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-0
00:15:22.284   18:36:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=333e1ddb-657d-4ad0-b146-27223d26a7fe
00:15:22.284    18:36:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:15:22.284    18:36:08 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:22.284    18:36:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:15:22.284    18:36:08 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:22.284    18:36:08 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:22.284    18:36:08 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:22.284    18:36:08 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:22.284     18:36:08 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:22.284     18:36:08 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:22.284     18:36:08 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:22.284     18:36:08 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:22.284     18:36:08 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:22.284     18:36:08 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:22.284    18:36:08 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:15:22.284  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:22.542   18:36:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme0
00:15:22.542   18:36:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme0 ]]
00:15:22.542    18:36:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 333e1ddb-657d-4ad0-b146-27223d26a7fe /sys/class/nvme/nvme0/nvme*/uuid'
00:15:22.542    18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:22.542    18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:22.542    18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:22.542    18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:22.542    18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:22.542     18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:22.542     18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:22.542     18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:22.542     18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:22.542     18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:22.542     18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:22.542    18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 333e1ddb-657d-4ad0-b146-27223d26a7fe /sys/class/nvme/nvme0/nvme*/uuid'
00:15:22.542  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:22.821   18:36:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=
00:15:22.821   18:36:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z '' ]]
00:15:22.821   18:36:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@84 -- # return 1
00:15:22.821   18:36:09 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:15:22.821   18:36:09 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:22.821   18:36:09 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:22.821   18:36:09 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
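The NOT block above exercises vm_check_subsys_volume (sma/vfiouser_qemu.sh@72-84): locate the guest's nvmeX controller by NQN, then look for the volume uuid under its namespaces. A minimal local sketch of that control flow, where the real helper runs its greps inside the guest over ssh via vm_exec; here a fake sysfs root stands in for /sys/class/nvme, and the fake directory layout is an assumption modeled on the paths in the trace:

```shell
# Local sketch of the vm_check_subsys_volume flow traced above.
# A temp dir mimics /sys/class/nvme so the logic can run outside the guest.
check_subsys_volume() {
    local root=$1 nqn=$2 uuid=$3
    local match nvme
    # @76: find the nvmeX controller whose subsysnqn matches the target NQN
    match=$(grep -l "$nqn" "$root"/*/subsysnqn 2>/dev/null | head -n1)
    [ -z "$match" ] && return 1          # @77: subsystem not attached at all
    nvme=$(basename "$(dirname "$match")")
    # @82-84: does any namespace under that controller expose the volume uuid?
    grep -q "$uuid" "$root/$nvme"/nvme*/uuid 2>/dev/null || return 1
}

# Fake controller matching the attached state seen in the trace
root=$(mktemp -d)
mkdir -p "$root/nvme0/nvme0c0n1"
echo "nqn.2016-06.io.spdk:vfiouser-0" > "$root/nvme0/subsysnqn"
echo "3b7e050b-3932-4521-baca-34878047379c" > "$root/nvme0/nvme0c0n1/uuid"

rc_attached=0
check_subsys_volume "$root" "nqn.2016-06.io.spdk:vfiouser-0" \
    "3b7e050b-3932-4521-baca-34878047379c" || rc_attached=$?
rc_wrong=0
check_subsys_volume "$root" "nqn.2016-06.io.spdk:vfiouser-0" \
    "333e1ddb-657d-4ad0-b146-27223d26a7fe" || rc_wrong=$?
echo "attached=$rc_attached wrong_uuid=$rc_wrong"
```

The return-1 path for a missing uuid is exactly what the surrounding NOT wrapper (autotest_common.sh@652-679) converts into a passing assertion.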
00:15:22.821   18:36:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@257 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 333e1ddb-657d-4ad0-b146-27223d26a7fe
00:15:22.821   18:36:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:15:22.821   18:36:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-1
00:15:22.821   18:36:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=333e1ddb-657d-4ad0-b146-27223d26a7fe
00:15:22.821    18:36:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:15:22.821    18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:22.821    18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:22.821    18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:22.821    18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:22.821    18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:22.821    18:36:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:15:22.821     18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:22.821     18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:22.821     18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:22.821     18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:22.821     18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:22.821     18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:22.822    18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:15:22.822  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:23.083   18:36:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme1
00:15:23.083   18:36:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme1 ]]
00:15:23.083    18:36:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 333e1ddb-657d-4ad0-b146-27223d26a7fe /sys/class/nvme/nvme1/nvme*/uuid'
00:15:23.083    18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:23.083    18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:23.083    18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:23.083    18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:23.083    18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:23.083     18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:23.083     18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:23.083     18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:23.083     18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:23.083     18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:23.083     18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:23.083    18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 333e1ddb-657d-4ad0-b146-27223d26a7fe /sys/class/nvme/nvme1/nvme*/uuid'
00:15:23.083  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:23.083   18:36:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=/sys/class/nvme/nvme1/nvme1c1n1/uuid
00:15:23.083   18:36:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z /sys/class/nvme/nvme1/nvme1c1n1/uuid ]]
00:15:23.083   18:36:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@258 -- # NOT vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 3b7e050b-3932-4521-baca-34878047379c
00:15:23.083   18:36:09 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:15:23.083   18:36:09 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 3b7e050b-3932-4521-baca-34878047379c
00:15:23.083   18:36:09 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=vm_check_subsys_volume
00:15:23.083   18:36:09 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:23.083    18:36:09 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t vm_check_subsys_volume
00:15:23.083   18:36:09 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:23.083   18:36:09 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 3b7e050b-3932-4521-baca-34878047379c
00:15:23.083   18:36:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:15:23.083   18:36:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-1
00:15:23.083   18:36:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=3b7e050b-3932-4521-baca-34878047379c
00:15:23.083    18:36:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:15:23.083    18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:23.083    18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:23.083    18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:23.083    18:36:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:15:23.083    18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:23.083    18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:23.342     18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:23.342     18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:23.342     18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:23.342     18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:23.342     18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:23.342     18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:23.342    18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:15:23.342  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:23.342   18:36:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme1
00:15:23.342   18:36:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme1 ]]
00:15:23.342    18:36:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 3b7e050b-3932-4521-baca-34878047379c /sys/class/nvme/nvme1/nvme*/uuid'
00:15:23.342    18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:23.342    18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:23.342    18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:23.342    18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:23.342    18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:23.342     18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:23.342     18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:23.342     18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:23.342     18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:23.342     18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:23.342     18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:23.342    18:36:09 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 3b7e050b-3932-4521-baca-34878047379c /sys/class/nvme/nvme1/nvme*/uuid'
00:15:23.601  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:23.601   18:36:10 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=
00:15:23.601   18:36:10 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z '' ]]
00:15:23.601   18:36:10 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@84 -- # return 1
00:15:23.601   18:36:10 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:15:23.601   18:36:10 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:23.601   18:36:10 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:23.601   18:36:10 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:23.601   18:36:10 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@261 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 3b7e050b-3932-4521-baca-34878047379c
00:15:23.601   18:36:10 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:23.601    18:36:10 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 3b7e050b-3932-4521-baca-34878047379c
00:15:23.601    18:36:10 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:15:23.860  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:23.860  I0000 00:00:1731864970.374338  480762 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:23.860  I0000 00:00:1731864970.376341  480762 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:23.860  {}
00:15:24.118   18:36:10 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@262 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-1 333e1ddb-657d-4ad0-b146-27223d26a7fe
00:15:24.118   18:36:10 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:24.118    18:36:10 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 333e1ddb-657d-4ad0-b146-27223d26a7fe
00:15:24.118    18:36:10 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:15:24.377  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:24.377  I0000 00:00:1731864970.779496  480787 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:24.377  I0000 00:00:1731864970.781384  480787 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:24.377  {}
00:15:24.377    18:36:10 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@263 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:15:24.377    18:36:10 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@263 -- # jq -r '.[0].namespaces | length'
00:15:24.377    18:36:10 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:24.377    18:36:10 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:24.377    18:36:10 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:24.377   18:36:10 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@263 -- # [[ 0 -eq 0 ]]
00:15:24.377    18:36:10 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@264 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:15:24.377    18:36:10 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@264 -- # jq -r '.[0].namespaces | length'
00:15:24.377    18:36:10 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:24.377    18:36:10 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:24.377    18:36:10 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:24.377   18:36:10 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@264 -- # [[ 0 -eq 0 ]]
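The @263/@264 assertions above parse `rpc_cmd nvmf_get_subsystems <nqn>` output with jq to confirm both subsystems are empty after detach. A minimal local sketch against a canned reply; the JSON shape (array of subsystems, each with a "namespaces" list) is an assumption inferred from the jq filter used in the trace:

```shell
# Canned stand-in for `rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0`;
# field layout assumed from the '.[0].namespaces | length' filter in the trace.
reply='[{"nqn":"nqn.2016-06.io.spdk:vfiouser-0","namespaces":[]}]'
ns_count=$(printf '%s' "$reply" | jq -r '.[0].namespaces | length')
# @263: after both volumes are detached, the subsystem should hold no namespaces
[ "$ns_count" -eq 0 ] && echo "subsystem empty"
```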
00:15:24.377   18:36:10 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@265 -- # NOT vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 3b7e050b-3932-4521-baca-34878047379c
00:15:24.378   18:36:10 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:15:24.378   18:36:10 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 3b7e050b-3932-4521-baca-34878047379c
00:15:24.378   18:36:10 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=vm_check_subsys_volume
00:15:24.378   18:36:10 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:24.378    18:36:10 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t vm_check_subsys_volume
00:15:24.378   18:36:10 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:24.378   18:36:10 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 3b7e050b-3932-4521-baca-34878047379c
00:15:24.378   18:36:10 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:15:24.378   18:36:10 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-0
00:15:24.378   18:36:10 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=3b7e050b-3932-4521-baca-34878047379c
00:15:24.378    18:36:10 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:15:24.378    18:36:10 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:15:24.378    18:36:10 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:24.378    18:36:10 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:24.378    18:36:10 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:24.378    18:36:10 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:24.378    18:36:10 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:24.378     18:36:10 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:24.378     18:36:10 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:24.378     18:36:10 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:24.378     18:36:10 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:24.378     18:36:10 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:24.378     18:36:10 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:24.378    18:36:10 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:15:24.637  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:24.637   18:36:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme0
00:15:24.637   18:36:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme0 ]]
00:15:24.637    18:36:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 3b7e050b-3932-4521-baca-34878047379c /sys/class/nvme/nvme0/nvme*/uuid'
00:15:24.637    18:36:11 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:24.637    18:36:11 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:24.637    18:36:11 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:24.637    18:36:11 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:24.637    18:36:11 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:24.637     18:36:11 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:24.637     18:36:11 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:24.637     18:36:11 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:24.637     18:36:11 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:24.637     18:36:11 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:24.637     18:36:11 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:24.637    18:36:11 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 3b7e050b-3932-4521-baca-34878047379c /sys/class/nvme/nvme0/nvme*/uuid'
00:15:24.896  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:24.896  grep: /sys/class/nvme/nvme0/nvme*/uuid: No such file or directory
00:15:24.896   18:36:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=
00:15:24.896   18:36:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z '' ]]
00:15:24.896   18:36:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@84 -- # return 1
00:15:24.896   18:36:11 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:15:24.896   18:36:11 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:24.896   18:36:11 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:24.896   18:36:11 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:24.896   18:36:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@266 -- # NOT vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 333e1ddb-657d-4ad0-b146-27223d26a7fe
00:15:24.896   18:36:11 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:15:24.896   18:36:11 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 333e1ddb-657d-4ad0-b146-27223d26a7fe
00:15:24.896   18:36:11 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=vm_check_subsys_volume
00:15:24.896   18:36:11 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:24.896    18:36:11 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t vm_check_subsys_volume
00:15:24.896   18:36:11 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:24.896   18:36:11 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 333e1ddb-657d-4ad0-b146-27223d26a7fe
00:15:24.896   18:36:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:15:24.896   18:36:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-1
00:15:24.896   18:36:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=333e1ddb-657d-4ad0-b146-27223d26a7fe
00:15:24.896    18:36:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:15:24.896    18:36:11 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:24.896    18:36:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:15:24.896    18:36:11 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:24.896    18:36:11 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:24.896    18:36:11 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:24.896    18:36:11 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:24.896     18:36:11 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:24.896     18:36:11 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:24.896     18:36:11 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:24.896     18:36:11 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:24.896     18:36:11 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:24.896     18:36:11 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:24.896    18:36:11 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:15:24.896  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:25.156   18:36:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme1
00:15:25.156   18:36:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme1 ]]
00:15:25.156    18:36:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 333e1ddb-657d-4ad0-b146-27223d26a7fe /sys/class/nvme/nvme1/nvme*/uuid'
00:15:25.156    18:36:11 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:25.156    18:36:11 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:25.156    18:36:11 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:25.156    18:36:11 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:25.156    18:36:11 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:25.156     18:36:11 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:25.156     18:36:11 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:25.156     18:36:11 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:25.156     18:36:11 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:25.156     18:36:11 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:25.156     18:36:11 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:25.156    18:36:11 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 333e1ddb-657d-4ad0-b146-27223d26a7fe /sys/class/nvme/nvme1/nvme*/uuid'
00:15:25.156  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:25.415  grep: /sys/class/nvme/nvme1/nvme*/uuid: No such file or directory
00:15:25.415   18:36:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=
00:15:25.415   18:36:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z '' ]]
00:15:25.415   18:36:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@84 -- # return 1
00:15:25.415   18:36:11 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:15:25.415   18:36:11 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:25.415   18:36:11 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:25.415   18:36:11 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:25.415   18:36:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@269 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 3b7e050b-3932-4521-baca-34878047379c
00:15:25.415   18:36:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:25.415    18:36:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 3b7e050b-3932-4521-baca-34878047379c
00:15:25.415    18:36:11 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:15:25.674  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:25.674  I0000 00:00:1731864972.123463  481121 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:25.674  I0000 00:00:1731864972.125245  481121 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:25.674  {}
00:15:25.674   18:36:12 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@270 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-1 333e1ddb-657d-4ad0-b146-27223d26a7fe
00:15:25.674   18:36:12 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:25.674    18:36:12 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 333e1ddb-657d-4ad0-b146-27223d26a7fe
00:15:25.674    18:36:12 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:15:25.933  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:25.933  I0000 00:00:1731864972.468458  481272 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:25.933  I0000 00:00:1731864972.470190  481272 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:25.933  {}
00:15:26.192   18:36:12 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@271 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 333e1ddb-657d-4ad0-b146-27223d26a7fe
00:15:26.192   18:36:12 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:26.192    18:36:12 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 333e1ddb-657d-4ad0-b146-27223d26a7fe
00:15:26.192    18:36:12 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:15:26.452  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:26.452  I0000 00:00:1731864972.790308  481299 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:26.452  I0000 00:00:1731864972.792004  481299 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:26.452  {}
00:15:26.452   18:36:12 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@272 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-1 3b7e050b-3932-4521-baca-34878047379c
00:15:26.452   18:36:12 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:26.452    18:36:12 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 3b7e050b-3932-4521-baca-34878047379c
00:15:26.452    18:36:12 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:15:26.711  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:26.711  I0000 00:00:1731864973.084044  481325 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:26.711  I0000 00:00:1731864973.085871  481325 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:26.711  {}
00:15:26.711   18:36:13 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@274 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-0
00:15:26.711   18:36:13 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:26.970  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:26.970  I0000 00:00:1731864973.336973  481549 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:26.970  I0000 00:00:1731864973.338678  481549 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:26.970  {}
00:15:26.970   18:36:13 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@275 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-1
00:15:26.970   18:36:13 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:27.229  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:27.229  I0000 00:00:1731864973.584833  481570 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:27.229  I0000 00:00:1731864973.586481  481570 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:27.229  {}
00:15:27.229    18:36:13 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@278 -- # jq -r .handle
00:15:27.229    18:36:13 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@278 -- # create_device 42 0
00:15:27.229    18:36:13 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=42
00:15:27.229    18:36:13 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:15:27.229    18:36:13 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:27.488  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:27.488  I0000 00:00:1731864973.835788  481596 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:27.488  I0000 00:00:1731864973.837700  481596 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:27.488  [2024-11-17 18:36:13.840091] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-42' does not exist
00:15:27.488   18:36:14 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@278 -- # device3=nvme:nqn.2016-06.io.spdk:vfiouser-42
00:15:27.488   18:36:14 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@279 -- # vm_check_subsys_nqn 0 nqn.2016-06.io.spdk:vfiouser-42
00:15:27.488   18:36:14 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@89 -- # sleep 1
00:15:27.747  [2024-11-17 18:36:14.184010] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/tmp/vfiouser-42: enabling controller
00:15:28.684    18:36:15 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-42 /sys/class/nvme/*/subsysnqn'
00:15:28.684    18:36:15 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:28.684    18:36:15 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:28.684    18:36:15 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:28.684    18:36:15 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:28.684    18:36:15 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:28.684     18:36:15 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:28.684     18:36:15 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:28.684     18:36:15 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:28.684     18:36:15 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:28.684     18:36:15 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:28.684     18:36:15 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:28.684    18:36:15 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-42 /sys/class/nvme/*/subsysnqn'
00:15:28.684  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:28.684   18:36:15 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # nqn=/sys/class/nvme/nvme0/subsysnqn
00:15:28.684   18:36:15 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@91 -- # [[ -z /sys/class/nvme/nvme0/subsysnqn ]]
00:15:28.684   18:36:15 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@282 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-42
00:15:28.684   18:36:15 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:28.943  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:28.943  I0000 00:00:1731864975.456936  481839 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:28.943  I0000 00:00:1731864975.458924  481839 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:28.943  {}
00:15:28.943   18:36:15 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@283 -- # NOT vm_check_subsys_nqn 0 nqn.2016-06.io.spdk:vfiouser-42
00:15:28.943   18:36:15 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:15:28.943   18:36:15 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg vm_check_subsys_nqn 0 nqn.2016-06.io.spdk:vfiouser-42
00:15:28.943   18:36:15 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=vm_check_subsys_nqn
00:15:28.943   18:36:15 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:28.943    18:36:15 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t vm_check_subsys_nqn
00:15:28.943   18:36:15 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:28.943   18:36:15 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # vm_check_subsys_nqn 0 nqn.2016-06.io.spdk:vfiouser-42
00:15:28.943   18:36:15 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@89 -- # sleep 1
00:15:30.321    18:36:16 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-42 /sys/class/nvme/*/subsysnqn'
00:15:30.321    18:36:16 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:30.321    18:36:16 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:30.321    18:36:16 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:30.321    18:36:16 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:30.321    18:36:16 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:30.321     18:36:16 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:30.321     18:36:16 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:30.321     18:36:16 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:30.321     18:36:16 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:30.321     18:36:16 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:30.321     18:36:16 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:30.321    18:36:16 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-42 /sys/class/nvme/*/subsysnqn'
00:15:30.321  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:30.321  grep: /sys/class/nvme/*/subsysnqn: No such file or directory
00:15:30.321   18:36:16 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # nqn=
00:15:30.321   18:36:16 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@91 -- # [[ -z '' ]]
00:15:30.321   18:36:16 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@92 -- # error 'FAILED no NVMe on vm=0 with nqn=nqn.2016-06.io.spdk:vfiouser-42'
00:15:30.321   18:36:16 sma.sma_vfiouser_qemu -- vhost/common.sh@82 -- # echo ===========
00:15:30.321  ===========
00:15:30.321   18:36:16 sma.sma_vfiouser_qemu -- vhost/common.sh@83 -- # message ERROR 'FAILED no NVMe on vm=0 with nqn=nqn.2016-06.io.spdk:vfiouser-42'
00:15:30.321   18:36:16 sma.sma_vfiouser_qemu -- vhost/common.sh@60 -- # local verbose_out
00:15:30.321   18:36:16 sma.sma_vfiouser_qemu -- vhost/common.sh@61 -- # false
00:15:30.321   18:36:16 sma.sma_vfiouser_qemu -- vhost/common.sh@62 -- # verbose_out=
00:15:30.321   18:36:16 sma.sma_vfiouser_qemu -- vhost/common.sh@69 -- # local msg_type=ERROR
00:15:30.321   18:36:16 sma.sma_vfiouser_qemu -- vhost/common.sh@70 -- # shift
00:15:30.321   18:36:16 sma.sma_vfiouser_qemu -- vhost/common.sh@71 -- # echo -e 'ERROR: FAILED no NVMe on vm=0 with nqn=nqn.2016-06.io.spdk:vfiouser-42'
00:15:30.321  ERROR: FAILED no NVMe on vm=0 with nqn=nqn.2016-06.io.spdk:vfiouser-42
00:15:30.321   18:36:16 sma.sma_vfiouser_qemu -- vhost/common.sh@84 -- # echo ===========
00:15:30.321  ===========
00:15:30.321   18:36:16 sma.sma_vfiouser_qemu -- vhost/common.sh@86 -- # false
00:15:30.321   18:36:16 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@93 -- # return 1
00:15:30.321   18:36:16 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:15:30.321   18:36:16 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:30.321   18:36:16 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:30.321   18:36:16 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:30.321   18:36:16 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@285 -- # key0=1234567890abcdef1234567890abcdef
00:15:30.321    18:36:16 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@286 -- # create_device 0 0
00:15:30.321    18:36:16 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=0
00:15:30.321    18:36:16 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:15:30.321    18:36:16 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:30.321    18:36:16 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@286 -- # jq -r .handle
00:15:30.580  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:30.580  I0000 00:00:1731864976.938996  482268 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:30.580  I0000 00:00:1731864976.940559  482268 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:30.580  [2024-11-17 18:36:16.948955] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-0' does not exist
00:15:30.580   18:36:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@286 -- # device0=nvme:nqn.2016-06.io.spdk:vfiouser-0
00:15:30.580    18:36:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@287 -- # rpc_cmd bdev_get_bdevs -b null0
00:15:30.580    18:36:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@287 -- # jq -r '.[].uuid'
00:15:30.580    18:36:17 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:30.580    18:36:17 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:30.580    18:36:17 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:30.580   18:36:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@287 -- # uuid0=3b7e050b-3932-4521-baca-34878047379c
00:15:30.580   18:36:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@290 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:30.580    18:36:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@290 -- # uuid2base64 3b7e050b-3932-4521-baca-34878047379c
00:15:30.580    18:36:17 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:15:30.839    18:36:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@290 -- # get_cipher AES_CBC
00:15:30.839    18:36:17 sma.sma_vfiouser_qemu -- sma/common.sh@27 -- # case "$1" in
00:15:30.839    18:36:17 sma.sma_vfiouser_qemu -- sma/common.sh@28 -- # echo 0
00:15:30.839    18:36:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@290 -- # format_key 1234567890abcdef1234567890abcdef
00:15:30.839    18:36:17 sma.sma_vfiouser_qemu -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:15:30.839     18:36:17 sma.sma_vfiouser_qemu -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:15:30.839  [2024-11-17 18:36:17.260711] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/tmp/vfiouser-0: enabling controller
00:15:30.839  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:30.839  I0000 00:00:1731864977.387349  482305 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:30.839  I0000 00:00:1731864977.389198  482305 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:31.098  {}
00:15:31.098    18:36:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@307 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:15:31.098    18:36:17 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:31.098    18:36:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@307 -- # jq -r '.[0].namespaces[0].name'
00:15:31.098    18:36:17 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:31.098    18:36:17 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:31.098   18:36:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@307 -- # ns_bdev=5ce0cefc-70e3-4e16-b1b2-fc69d11e001e
00:15:31.098    18:36:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@308 -- # rpc_cmd bdev_get_bdevs -b 5ce0cefc-70e3-4e16-b1b2-fc69d11e001e
00:15:31.098    18:36:17 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:31.098    18:36:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@308 -- # jq -r '.[0].product_name'
00:15:31.098    18:36:17 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:31.098    18:36:17 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:31.098   18:36:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@308 -- # [[ crypto == \c\r\y\p\t\o ]]
00:15:31.098    18:36:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@309 -- # rpc_cmd bdev_get_bdevs -b 5ce0cefc-70e3-4e16-b1b2-fc69d11e001e
00:15:31.098    18:36:17 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:31.098    18:36:17 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:31.098    18:36:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@309 -- # jq -r '.[] | select(.product_name == "crypto")'
00:15:31.098    18:36:17 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:31.098   18:36:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@309 -- # crypto_bdev='{
00:15:31.098    "name": "5ce0cefc-70e3-4e16-b1b2-fc69d11e001e",
00:15:31.098    "aliases": [
00:15:31.098      "27cfa553-04ed-5fc2-9f56-87884a23fb0a"
00:15:31.098    ],
00:15:31.098    "product_name": "crypto",
00:15:31.098    "block_size": 4096,
00:15:31.098    "num_blocks": 25600,
00:15:31.098    "uuid": "27cfa553-04ed-5fc2-9f56-87884a23fb0a",
00:15:31.098    "assigned_rate_limits": {
00:15:31.098      "rw_ios_per_sec": 0,
00:15:31.098      "rw_mbytes_per_sec": 0,
00:15:31.098      "r_mbytes_per_sec": 0,
00:15:31.098      "w_mbytes_per_sec": 0
00:15:31.098    },
00:15:31.098    "claimed": true,
00:15:31.098    "claim_type": "exclusive_write",
00:15:31.098    "zoned": false,
00:15:31.098    "supported_io_types": {
00:15:31.098      "read": true,
00:15:31.098      "write": true,
00:15:31.098      "unmap": false,
00:15:31.098      "flush": false,
00:15:31.098      "reset": true,
00:15:31.098      "nvme_admin": false,
00:15:31.098      "nvme_io": false,
00:15:31.098      "nvme_io_md": false,
00:15:31.098      "write_zeroes": true,
00:15:31.098      "zcopy": false,
00:15:31.098      "get_zone_info": false,
00:15:31.098      "zone_management": false,
00:15:31.098      "zone_append": false,
00:15:31.098      "compare": false,
00:15:31.098      "compare_and_write": false,
00:15:31.098      "abort": false,
00:15:31.098      "seek_hole": false,
00:15:31.098      "seek_data": false,
00:15:31.098      "copy": false,
00:15:31.098      "nvme_iov_md": false
00:15:31.098    },
00:15:31.098    "memory_domains": [
00:15:31.098      {
00:15:31.098        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:31.098        "dma_device_type": 2
00:15:31.098      }
00:15:31.098    ],
00:15:31.098    "driver_specific": {
00:15:31.098      "crypto": {
00:15:31.098        "base_bdev_name": "null0",
00:15:31.098        "name": "5ce0cefc-70e3-4e16-b1b2-fc69d11e001e",
00:15:31.098        "key_name": "5ce0cefc-70e3-4e16-b1b2-fc69d11e001e_AES_CBC"
00:15:31.098      }
00:15:31.098    }
00:15:31.098  }'
00:15:31.098    18:36:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@310 -- # rpc_cmd bdev_get_bdevs
00:15:31.098    18:36:17 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:31.098    18:36:17 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:31.098    18:36:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@310 -- # jq -r '[.[] | select(.product_name == "crypto")] | length'
00:15:31.098    18:36:17 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:31.098   18:36:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@310 -- # [[ 1 -eq 1 ]]
00:15:31.098    18:36:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@312 -- # jq -r .driver_specific.crypto.key_name
00:15:31.098   18:36:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@312 -- # key_name=5ce0cefc-70e3-4e16-b1b2-fc69d11e001e_AES_CBC
00:15:31.098    18:36:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@313 -- # rpc_cmd accel_crypto_keys_get -k 5ce0cefc-70e3-4e16-b1b2-fc69d11e001e_AES_CBC
00:15:31.098    18:36:17 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:31.098    18:36:17 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:31.098    18:36:17 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:31.098   18:36:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@313 -- # key_obj='[
00:15:31.098  {
00:15:31.098  "name": "5ce0cefc-70e3-4e16-b1b2-fc69d11e001e_AES_CBC",
00:15:31.098  "cipher": "AES_CBC",
00:15:31.098  "key": "1234567890abcdef1234567890abcdef"
00:15:31.098  }
00:15:31.098  ]'
00:15:31.098    18:36:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@314 -- # jq -r '.[0].key'
00:15:31.357   18:36:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@314 -- # [[ 1234567890abcdef1234567890abcdef == \1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f\1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f ]]
00:15:31.357    18:36:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@315 -- # jq -r '.[0].cipher'
00:15:31.357   18:36:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@315 -- # [[ AES_CBC == \A\E\S\_\C\B\C ]]
00:15:31.357   18:36:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@317 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 3b7e050b-3932-4521-baca-34878047379c
00:15:31.357   18:36:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:31.357    18:36:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 3b7e050b-3932-4521-baca-34878047379c
00:15:31.357    18:36:17 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:15:31.616  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:31.616  I0000 00:00:1731864978.074177  482452 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:31.616  I0000 00:00:1731864978.076097  482452 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:31.616  {}
00:15:31.616   18:36:18 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@318 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-0
00:15:31.616   18:36:18 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:31.875  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:31.875  I0000 00:00:1731864978.349599  482578 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:31.875  I0000 00:00:1731864978.351249  482578 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:31.875  {}
00:15:31.875    18:36:18 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@319 -- # rpc_cmd bdev_get_bdevs
00:15:31.875    18:36:18 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@319 -- # jq -r '.[] | select(.product_name == "crypto")'
00:15:31.875    18:36:18 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:31.875    18:36:18 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:31.875    18:36:18 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@319 -- # jq -r length
00:15:31.875    18:36:18 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:31.875   18:36:18 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@319 -- # [[ '' -eq 0 ]]
00:15:31.875   18:36:18 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@322 -- # device_vfio_user=1
00:15:31.875    18:36:18 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@323 -- # create_device 0 0
00:15:31.875    18:36:18 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=0
00:15:31.875    18:36:18 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:15:31.875    18:36:18 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:31.875    18:36:18 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@323 -- # jq -r .handle
00:15:32.133  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:32.133  I0000 00:00:1731864978.620901  482606 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:32.133  I0000 00:00:1731864978.622387  482606 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:32.133  [2024-11-17 18:36:18.625820] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-0' does not exist
00:15:32.392   18:36:18 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@323 -- # device0=nvme:nqn.2016-06.io.spdk:vfiouser-0
00:15:32.392   18:36:18 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@324 -- # attach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 3b7e050b-3932-4521-baca-34878047379c
00:15:32.392   18:36:18 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:32.392    18:36:18 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # uuid2base64 3b7e050b-3932-4521-baca-34878047379c
00:15:32.392    18:36:18 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:15:32.392  [2024-11-17 18:36:18.937556] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/tmp/vfiouser-0: enabling controller
00:15:32.651  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:32.651  I0000 00:00:1731864979.080751  482631 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:32.651  I0000 00:00:1731864979.082681  482631 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:32.651  {}
00:15:32.651    18:36:19 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@327 -- # jq --sort-keys
00:15:32.651   18:36:19 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@327 -- # diff /dev/fd/62 /dev/fd/61
00:15:32.651    18:36:19 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@327 -- # jq --sort-keys
00:15:32.651    18:36:19 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@327 -- # get_qos_caps 1
00:15:32.651    18:36:19 sma.sma_vfiouser_qemu -- sma/common.sh@45 -- # local rootdir
00:15:32.651     18:36:19 sma.sma_vfiouser_qemu -- sma/common.sh@47 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:15:32.651    18:36:19 sma.sma_vfiouser_qemu -- sma/common.sh@47 -- # rootdir=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../..
00:15:32.651    18:36:19 sma.sma_vfiouser_qemu -- sma/common.sh@49 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../../scripts/sma-client.py
00:15:32.909  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:32.909  I0000 00:00:1731864979.362829  482863 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:32.909  I0000 00:00:1731864979.364694  482863 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:32.909   18:36:19 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@340 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:32.909    18:36:19 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@340 -- # uuid2base64 3b7e050b-3932-4521-baca-34878047379c
00:15:32.909    18:36:19 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:15:33.167  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:33.167  I0000 00:00:1731864979.622482  482885 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:33.167  I0000 00:00:1731864979.624056  482885 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:33.167  {}
00:15:33.167    18:36:19 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@359 -- # rpc_cmd bdev_get_bdevs -b null0
00:15:33.167    18:36:19 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:33.167    18:36:19 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:33.167    18:36:19 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@359 -- # jq --sort-keys
00:15:33.167   18:36:19 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@359 -- # diff /dev/fd/62 /dev/fd/61
00:15:33.167    18:36:19 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@359 -- # jq --sort-keys '.[].assigned_rate_limits'
00:15:33.167    18:36:19 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:33.167   18:36:19 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@370 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 3b7e050b-3932-4521-baca-34878047379c
00:15:33.167   18:36:19 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:33.167    18:36:19 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 3b7e050b-3932-4521-baca-34878047379c
00:15:33.167    18:36:19 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:15:33.425  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:33.425  I0000 00:00:1731864979.951333  482914 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:33.425  I0000 00:00:1731864979.952718  482914 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:33.425  {}
00:15:33.684   18:36:20 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@371 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-0
00:15:33.684   18:36:20 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:33.684  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:33.684  I0000 00:00:1731864980.231942  482939 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:33.684  I0000 00:00:1731864980.233855  482939 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:33.684  {}
00:15:33.944   18:36:20 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@373 -- # cleanup
00:15:33.944   18:36:20 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@98 -- # vm_kill_all
00:15:33.944   18:36:20 sma.sma_vfiouser_qemu -- vhost/common.sh@476 -- # local vm
00:15:33.944    18:36:20 sma.sma_vfiouser_qemu -- vhost/common.sh@477 -- # vm_list_all
00:15:33.944    18:36:20 sma.sma_vfiouser_qemu -- vhost/common.sh@466 -- # vms=()
00:15:33.944    18:36:20 sma.sma_vfiouser_qemu -- vhost/common.sh@466 -- # local vms
00:15:33.944    18:36:20 sma.sma_vfiouser_qemu -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:15:33.944    18:36:20 sma.sma_vfiouser_qemu -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:15:33.944    18:36:20 sma.sma_vfiouser_qemu -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/0
00:15:33.944   18:36:20 sma.sma_vfiouser_qemu -- vhost/common.sh@477 -- # for vm in $(vm_list_all)
00:15:33.944   18:36:20 sma.sma_vfiouser_qemu -- vhost/common.sh@478 -- # vm_kill 0
00:15:33.944   18:36:20 sma.sma_vfiouser_qemu -- vhost/common.sh@442 -- # vm_num_is_valid 0
00:15:33.944   18:36:20 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:33.944   18:36:20 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:33.944   18:36:20 sma.sma_vfiouser_qemu -- vhost/common.sh@443 -- # local vm_dir=/root/vhost_test/vms/0
00:15:33.944   18:36:20 sma.sma_vfiouser_qemu -- vhost/common.sh@445 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:15:33.944   18:36:20 sma.sma_vfiouser_qemu -- vhost/common.sh@449 -- # local vm_pid
00:15:33.944    18:36:20 sma.sma_vfiouser_qemu -- vhost/common.sh@450 -- # cat /root/vhost_test/vms/0/qemu.pid
00:15:33.944   18:36:20 sma.sma_vfiouser_qemu -- vhost/common.sh@450 -- # vm_pid=472750
00:15:33.944   18:36:20 sma.sma_vfiouser_qemu -- vhost/common.sh@452 -- # notice 'Killing virtual machine /root/vhost_test/vms/0 (pid=472750)'
00:15:33.944   18:36:20 sma.sma_vfiouser_qemu -- vhost/common.sh@94 -- # message INFO 'Killing virtual machine /root/vhost_test/vms/0 (pid=472750)'
00:15:33.944   18:36:20 sma.sma_vfiouser_qemu -- vhost/common.sh@60 -- # local verbose_out
00:15:33.944   18:36:20 sma.sma_vfiouser_qemu -- vhost/common.sh@61 -- # false
00:15:33.944   18:36:20 sma.sma_vfiouser_qemu -- vhost/common.sh@62 -- # verbose_out=
00:15:33.944   18:36:20 sma.sma_vfiouser_qemu -- vhost/common.sh@69 -- # local msg_type=INFO
00:15:33.944   18:36:20 sma.sma_vfiouser_qemu -- vhost/common.sh@70 -- # shift
00:15:33.944   18:36:20 sma.sma_vfiouser_qemu -- vhost/common.sh@71 -- # echo -e 'INFO: Killing virtual machine /root/vhost_test/vms/0 (pid=472750)'
00:15:33.944  INFO: Killing virtual machine /root/vhost_test/vms/0 (pid=472750)
00:15:33.944   18:36:20 sma.sma_vfiouser_qemu -- vhost/common.sh@454 -- # /bin/kill 472750
00:15:33.944   18:36:20 sma.sma_vfiouser_qemu -- vhost/common.sh@455 -- # notice 'process 472750 killed'
00:15:33.944   18:36:20 sma.sma_vfiouser_qemu -- vhost/common.sh@94 -- # message INFO 'process 472750 killed'
00:15:33.944   18:36:20 sma.sma_vfiouser_qemu -- vhost/common.sh@60 -- # local verbose_out
00:15:33.944   18:36:20 sma.sma_vfiouser_qemu -- vhost/common.sh@61 -- # false
00:15:33.944   18:36:20 sma.sma_vfiouser_qemu -- vhost/common.sh@62 -- # verbose_out=
00:15:33.944   18:36:20 sma.sma_vfiouser_qemu -- vhost/common.sh@69 -- # local msg_type=INFO
00:15:33.944   18:36:20 sma.sma_vfiouser_qemu -- vhost/common.sh@70 -- # shift
00:15:33.944   18:36:20 sma.sma_vfiouser_qemu -- vhost/common.sh@71 -- # echo -e 'INFO: process 472750 killed'
00:15:33.944  INFO: process 472750 killed
00:15:33.944   18:36:20 sma.sma_vfiouser_qemu -- vhost/common.sh@456 -- # rm -rf /root/vhost_test/vms/0
00:15:33.944   18:36:20 sma.sma_vfiouser_qemu -- vhost/common.sh@481 -- # rm -rf /root/vhost_test/vms
00:15:33.944   18:36:20 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@99 -- # killprocess 476727
00:15:33.944   18:36:20 sma.sma_vfiouser_qemu -- common/autotest_common.sh@954 -- # '[' -z 476727 ']'
00:15:33.944   18:36:20 sma.sma_vfiouser_qemu -- common/autotest_common.sh@958 -- # kill -0 476727
00:15:33.944    18:36:20 sma.sma_vfiouser_qemu -- common/autotest_common.sh@959 -- # uname
00:15:33.944   18:36:20 sma.sma_vfiouser_qemu -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:33.944    18:36:20 sma.sma_vfiouser_qemu -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 476727
00:15:33.944   18:36:20 sma.sma_vfiouser_qemu -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:15:33.944   18:36:20 sma.sma_vfiouser_qemu -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:15:33.944   18:36:20 sma.sma_vfiouser_qemu -- common/autotest_common.sh@972 -- # echo 'killing process with pid 476727'
00:15:33.944  killing process with pid 476727
00:15:33.944   18:36:20 sma.sma_vfiouser_qemu -- common/autotest_common.sh@973 -- # kill 476727
00:15:33.944   18:36:20 sma.sma_vfiouser_qemu -- common/autotest_common.sh@978 -- # wait 476727
00:15:34.203   18:36:20 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@100 -- # killprocess 476952
00:15:34.203   18:36:20 sma.sma_vfiouser_qemu -- common/autotest_common.sh@954 -- # '[' -z 476952 ']'
00:15:34.203   18:36:20 sma.sma_vfiouser_qemu -- common/autotest_common.sh@958 -- # kill -0 476952
00:15:34.203    18:36:20 sma.sma_vfiouser_qemu -- common/autotest_common.sh@959 -- # uname
00:15:34.203   18:36:20 sma.sma_vfiouser_qemu -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:34.203    18:36:20 sma.sma_vfiouser_qemu -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 476952
00:15:34.463   18:36:20 sma.sma_vfiouser_qemu -- common/autotest_common.sh@960 -- # process_name=python3
00:15:34.463   18:36:20 sma.sma_vfiouser_qemu -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:15:34.463   18:36:20 sma.sma_vfiouser_qemu -- common/autotest_common.sh@972 -- # echo 'killing process with pid 476952'
00:15:34.463  killing process with pid 476952
00:15:34.463   18:36:20 sma.sma_vfiouser_qemu -- common/autotest_common.sh@973 -- # kill 476952
00:15:34.463   18:36:20 sma.sma_vfiouser_qemu -- common/autotest_common.sh@978 -- # wait 476952
00:15:34.463   18:36:20 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@101 -- # '[' -e /tmp/sma/vfio-user/qemu ']'
00:15:34.463   18:36:20 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@101 -- # rm -rf /tmp/sma/vfio-user/qemu
00:15:34.463   18:36:20 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@374 -- # trap - SIGINT SIGTERM EXIT
00:15:34.463  
00:15:34.463  real	0m50.914s
00:15:34.463  user	0m38.032s
00:15:34.463  sys	0m3.552s
00:15:34.463   18:36:20 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:34.463   18:36:20 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:34.463  ************************************
00:15:34.463  END TEST sma_vfiouser_qemu
00:15:34.463  ************************************
00:15:34.463   18:36:20 sma -- sma/sma.sh@13 -- # run_test sma_plugins /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins.sh
00:15:34.463   18:36:20 sma -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:34.463   18:36:20 sma -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:34.463   18:36:20 sma -- common/autotest_common.sh@10 -- # set +x
00:15:34.463  ************************************
00:15:34.463  START TEST sma_plugins
00:15:34.463  ************************************
00:15:34.463   18:36:20 sma.sma_plugins -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins.sh
00:15:34.463  * Looking for test storage...
00:15:34.463  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:15:34.463    18:36:20 sma.sma_plugins -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:15:34.463     18:36:20 sma.sma_plugins -- common/autotest_common.sh@1693 -- # lcov --version
00:15:34.463     18:36:20 sma.sma_plugins -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:15:34.463    18:36:21 sma.sma_plugins -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:15:34.463    18:36:21 sma.sma_plugins -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:15:34.463    18:36:21 sma.sma_plugins -- scripts/common.sh@333 -- # local ver1 ver1_l
00:15:34.463    18:36:21 sma.sma_plugins -- scripts/common.sh@334 -- # local ver2 ver2_l
00:15:34.463    18:36:21 sma.sma_plugins -- scripts/common.sh@336 -- # IFS=.-:
00:15:34.463    18:36:21 sma.sma_plugins -- scripts/common.sh@336 -- # read -ra ver1
00:15:34.463    18:36:21 sma.sma_plugins -- scripts/common.sh@337 -- # IFS=.-:
00:15:34.463    18:36:21 sma.sma_plugins -- scripts/common.sh@337 -- # read -ra ver2
00:15:34.463    18:36:21 sma.sma_plugins -- scripts/common.sh@338 -- # local 'op=<'
00:15:34.463    18:36:21 sma.sma_plugins -- scripts/common.sh@340 -- # ver1_l=2
00:15:34.463    18:36:21 sma.sma_plugins -- scripts/common.sh@341 -- # ver2_l=1
00:15:34.463    18:36:21 sma.sma_plugins -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:15:34.463    18:36:21 sma.sma_plugins -- scripts/common.sh@344 -- # case "$op" in
00:15:34.463    18:36:21 sma.sma_plugins -- scripts/common.sh@345 -- # : 1
00:15:34.463    18:36:21 sma.sma_plugins -- scripts/common.sh@364 -- # (( v = 0 ))
00:15:34.463    18:36:21 sma.sma_plugins -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:15:34.463     18:36:21 sma.sma_plugins -- scripts/common.sh@365 -- # decimal 1
00:15:34.463     18:36:21 sma.sma_plugins -- scripts/common.sh@353 -- # local d=1
00:15:34.463     18:36:21 sma.sma_plugins -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:34.463     18:36:21 sma.sma_plugins -- scripts/common.sh@355 -- # echo 1
00:15:34.463    18:36:21 sma.sma_plugins -- scripts/common.sh@365 -- # ver1[v]=1
00:15:34.463     18:36:21 sma.sma_plugins -- scripts/common.sh@366 -- # decimal 2
00:15:34.463     18:36:21 sma.sma_plugins -- scripts/common.sh@353 -- # local d=2
00:15:34.463     18:36:21 sma.sma_plugins -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:15:34.463     18:36:21 sma.sma_plugins -- scripts/common.sh@355 -- # echo 2
00:15:34.463    18:36:21 sma.sma_plugins -- scripts/common.sh@366 -- # ver2[v]=2
00:15:34.463    18:36:21 sma.sma_plugins -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:15:34.463    18:36:21 sma.sma_plugins -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:15:34.463    18:36:21 sma.sma_plugins -- scripts/common.sh@368 -- # return 0
00:15:34.463    18:36:21 sma.sma_plugins -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:15:34.463    18:36:21 sma.sma_plugins -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:15:34.463  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:34.463  		--rc genhtml_branch_coverage=1
00:15:34.463  		--rc genhtml_function_coverage=1
00:15:34.463  		--rc genhtml_legend=1
00:15:34.463  		--rc geninfo_all_blocks=1
00:15:34.463  		--rc geninfo_unexecuted_blocks=1
00:15:34.463  		
00:15:34.463  		'
00:15:34.463    18:36:21 sma.sma_plugins -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:15:34.463  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:34.463  		--rc genhtml_branch_coverage=1
00:15:34.463  		--rc genhtml_function_coverage=1
00:15:34.463  		--rc genhtml_legend=1
00:15:34.463  		--rc geninfo_all_blocks=1
00:15:34.463  		--rc geninfo_unexecuted_blocks=1
00:15:34.463  		
00:15:34.463  		'
00:15:34.463    18:36:21 sma.sma_plugins -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:15:34.463  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:34.463  		--rc genhtml_branch_coverage=1
00:15:34.463  		--rc genhtml_function_coverage=1
00:15:34.463  		--rc genhtml_legend=1
00:15:34.463  		--rc geninfo_all_blocks=1
00:15:34.463  		--rc geninfo_unexecuted_blocks=1
00:15:34.463  		
00:15:34.464  		'
00:15:34.464    18:36:21 sma.sma_plugins -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:15:34.464  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:34.464  		--rc genhtml_branch_coverage=1
00:15:34.464  		--rc genhtml_function_coverage=1
00:15:34.464  		--rc genhtml_legend=1
00:15:34.464  		--rc geninfo_all_blocks=1
00:15:34.464  		--rc geninfo_unexecuted_blocks=1
00:15:34.464  		
00:15:34.464  		'
00:15:34.464   18:36:21 sma.sma_plugins -- sma/plugins.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:15:34.464   18:36:21 sma.sma_plugins -- sma/plugins.sh@28 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:15:34.464   18:36:21 sma.sma_plugins -- sma/plugins.sh@30 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:15:34.464   18:36:21 sma.sma_plugins -- sma/plugins.sh@31 -- # tgtpid=483237
00:15:34.464   18:36:21 sma.sma_plugins -- sma/plugins.sh@43 -- # smapid=483238
00:15:34.464   18:36:21 sma.sma_plugins -- sma/plugins.sh@45 -- # sma_waitforlisten
00:15:34.464   18:36:21 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:15:34.464   18:36:21 sma.sma_plugins -- sma/plugins.sh@34 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:15:34.464   18:36:21 sma.sma_plugins -- sma/plugins.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:15:34.464   18:36:21 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:15:34.464   18:36:21 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:15:34.464    18:36:21 sma.sma_plugins -- sma/plugins.sh@34 -- # cat
00:15:34.464   18:36:21 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:34.464   18:36:21 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:34.723   18:36:21 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:15:34.723  [2024-11-17 18:36:21.120546] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:15:34.723  [2024-11-17 18:36:21.120666] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483237 ]
00:15:34.724  EAL: No free 2048 kB hugepages reported on node 1
00:15:34.724  [2024-11-17 18:36:21.229635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:34.724  [2024-11-17 18:36:21.265982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:35.660   18:36:22 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:15:35.660   18:36:22 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:35.660   18:36:22 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:35.660   18:36:22 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:15:35.920  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:35.920  I0000 00:00:1731864982.256250  483238 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:36.857   18:36:23 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:15:36.857   18:36:23 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:36.857   18:36:23 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:36.857   18:36:23 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:15:36.857    18:36:23 sma.sma_plugins -- sma/plugins.sh@47 -- # create_device nvme
00:15:36.857    18:36:23 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:36.857    18:36:23 sma.sma_plugins -- sma/plugins.sh@47 -- # jq -r .handle
00:15:36.857  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:36.857  I0000 00:00:1731864983.316888  483679 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:36.857  I0000 00:00:1731864983.318835  483679 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:36.858   18:36:23 sma.sma_plugins -- sma/plugins.sh@47 -- # [[ nvme:plugin1-device1:nop == \n\v\m\e\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\1\:\n\o\p ]]
00:15:36.858    18:36:23 sma.sma_plugins -- sma/plugins.sh@48 -- # create_device nvmf_tcp
00:15:36.858    18:36:23 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:36.858    18:36:23 sma.sma_plugins -- sma/plugins.sh@48 -- # jq -r .handle
00:15:37.115  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:37.115  I0000 00:00:1731864983.559583  483715 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:37.115  I0000 00:00:1731864983.561093  483715 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:37.115   18:36:23 sma.sma_plugins -- sma/plugins.sh@48 -- # [[ nvmf_tcp:plugin1-device2:nop == \n\v\m\f\_\t\c\p\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\2\:\n\o\p ]]
00:15:37.115   18:36:23 sma.sma_plugins -- sma/plugins.sh@50 -- # killprocess 483238
00:15:37.115   18:36:23 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 483238 ']'
00:15:37.115   18:36:23 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 483238
00:15:37.115    18:36:23 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:15:37.115   18:36:23 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:37.115    18:36:23 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 483238
00:15:37.115   18:36:23 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:15:37.115   18:36:23 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:15:37.115   18:36:23 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 483238'
00:15:37.115  killing process with pid 483238
00:15:37.115   18:36:23 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 483238
00:15:37.115   18:36:23 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 483238
00:15:37.115   18:36:23 sma.sma_plugins -- sma/plugins.sh@61 -- # smapid=483839
00:15:37.115   18:36:23 sma.sma_plugins -- sma/plugins.sh@62 -- # sma_waitforlisten
00:15:37.115   18:36:23 sma.sma_plugins -- sma/plugins.sh@53 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:15:37.115   18:36:23 sma.sma_plugins -- sma/plugins.sh@53 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:15:37.115   18:36:23 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:15:37.115    18:36:23 sma.sma_plugins -- sma/plugins.sh@53 -- # cat
00:15:37.115   18:36:23 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:15:37.115   18:36:23 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:15:37.115   18:36:23 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:37.115   18:36:23 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:37.373   18:36:23 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:15:37.373  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:37.373  I0000 00:00:1731864983.908264  483839 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:38.308   18:36:24 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:15:38.309   18:36:24 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:38.309   18:36:24 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:38.309   18:36:24 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:15:38.309    18:36:24 sma.sma_plugins -- sma/plugins.sh@64 -- # create_device nvmf_tcp
00:15:38.309    18:36:24 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:38.309    18:36:24 sma.sma_plugins -- sma/plugins.sh@64 -- # jq -r .handle
00:15:38.568  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:38.568  I0000 00:00:1731864984.943733  483978 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:38.568  I0000 00:00:1731864984.945570  483978 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:38.568   18:36:24 sma.sma_plugins -- sma/plugins.sh@64 -- # [[ nvmf_tcp:plugin1-device2:nop == \n\v\m\f\_\t\c\p\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\2\:\n\o\p ]]
00:15:38.568   18:36:24 sma.sma_plugins -- sma/plugins.sh@65 -- # NOT create_device nvme
00:15:38.568   18:36:24 sma.sma_plugins -- common/autotest_common.sh@652 -- # local es=0
00:15:38.568   18:36:24 sma.sma_plugins -- common/autotest_common.sh@654 -- # valid_exec_arg create_device nvme
00:15:38.568   18:36:24 sma.sma_plugins -- common/autotest_common.sh@640 -- # local arg=create_device
00:15:38.568   18:36:24 sma.sma_plugins -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:38.568    18:36:24 sma.sma_plugins -- common/autotest_common.sh@644 -- # type -t create_device
00:15:38.568   18:36:24 sma.sma_plugins -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:38.568   18:36:24 sma.sma_plugins -- common/autotest_common.sh@655 -- # create_device nvme
00:15:38.568   18:36:24 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:38.827  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:38.827  I0000 00:00:1731864985.171698  484193 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:38.827  I0000 00:00:1731864985.173340  484193 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:38.827  Traceback (most recent call last):
00:15:38.827    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:15:38.827      main(sys.argv[1:])
00:15:38.827    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:15:38.827      result = client.call(request['method'], request.get('params', {}))
00:15:38.827               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:15:38.827    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:15:38.827      response = func(request=json_format.ParseDict(params, input()))
00:15:38.827                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:15:38.827    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:15:38.827      return _end_unary_response_blocking(state, call, False, None)
00:15:38.827             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:15:38.827    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:15:38.827      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:15:38.827      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:15:38.827  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:15:38.827  	status = StatusCode.INVALID_ARGUMENT
00:15:38.827  	details = "Unsupported device type"
00:15:38.827  	debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B::1%5D:8080 {grpc_message:"Unsupported device type", grpc_status:3, created_time:"2024-11-17T18:36:25.175287967+01:00"}"
00:15:38.827  >
00:15:38.827   18:36:25 sma.sma_plugins -- common/autotest_common.sh@655 -- # es=1
00:15:38.827   18:36:25 sma.sma_plugins -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:38.827   18:36:25 sma.sma_plugins -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:38.827   18:36:25 sma.sma_plugins -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:38.827   18:36:25 sma.sma_plugins -- sma/plugins.sh@67 -- # killprocess 483839
00:15:38.827   18:36:25 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 483839 ']'
00:15:38.827   18:36:25 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 483839
00:15:38.827    18:36:25 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:15:38.827   18:36:25 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:38.827    18:36:25 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 483839
00:15:38.827   18:36:25 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:15:38.827   18:36:25 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:15:38.827   18:36:25 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 483839'
00:15:38.827  killing process with pid 483839
00:15:38.827   18:36:25 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 483839
00:15:38.827   18:36:25 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 483839
00:15:38.827   18:36:25 sma.sma_plugins -- sma/plugins.sh@80 -- # smapid=484229
00:15:38.827   18:36:25 sma.sma_plugins -- sma/plugins.sh@81 -- # sma_waitforlisten
00:15:38.827    18:36:25 sma.sma_plugins -- sma/plugins.sh@70 -- # cat
00:15:38.827   18:36:25 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:15:38.827   18:36:25 sma.sma_plugins -- sma/plugins.sh@70 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:15:38.827   18:36:25 sma.sma_plugins -- sma/plugins.sh@70 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:15:38.827   18:36:25 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:15:38.827   18:36:25 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:15:38.827   18:36:25 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:38.827   18:36:25 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:38.827   18:36:25 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:15:39.086  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:39.086  I0000 00:00:1731864985.496540  484229 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:40.022   18:36:26 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:15:40.022   18:36:26 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:40.022   18:36:26 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:40.022   18:36:26 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:15:40.022    18:36:26 sma.sma_plugins -- sma/plugins.sh@83 -- # create_device nvme
00:15:40.022    18:36:26 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:40.022    18:36:26 sma.sma_plugins -- sma/plugins.sh@83 -- # jq -r .handle
00:15:40.022  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:40.023  I0000 00:00:1731864986.535951  484464 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:40.023  I0000 00:00:1731864986.538143  484464 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:40.023   18:36:26 sma.sma_plugins -- sma/plugins.sh@83 -- # [[ nvme:plugin1-device1:nop == \n\v\m\e\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\1\:\n\o\p ]]
00:15:40.023    18:36:26 sma.sma_plugins -- sma/plugins.sh@84 -- # create_device nvmf_tcp
00:15:40.023    18:36:26 sma.sma_plugins -- sma/plugins.sh@84 -- # jq -r .handle
00:15:40.023    18:36:26 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:40.282  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:40.282  I0000 00:00:1731864986.766957  484492 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:40.282  I0000 00:00:1731864986.768533  484492 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:40.282   18:36:26 sma.sma_plugins -- sma/plugins.sh@84 -- # [[ nvmf_tcp:plugin1-device2:nop == \n\v\m\f\_\t\c\p\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\2\:\n\o\p ]]
00:15:40.282   18:36:26 sma.sma_plugins -- sma/plugins.sh@86 -- # killprocess 484229
00:15:40.282   18:36:26 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 484229 ']'
00:15:40.282   18:36:26 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 484229
00:15:40.282    18:36:26 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:15:40.282   18:36:26 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:40.282    18:36:26 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 484229
00:15:40.282   18:36:26 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:15:40.282   18:36:26 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:15:40.282   18:36:26 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 484229'
00:15:40.282  killing process with pid 484229
00:15:40.282   18:36:26 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 484229
00:15:40.282   18:36:26 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 484229
00:15:40.540   18:36:26 sma.sma_plugins -- sma/plugins.sh@99 -- # smapid=484522
00:15:40.540   18:36:26 sma.sma_plugins -- sma/plugins.sh@89 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:15:40.540   18:36:26 sma.sma_plugins -- sma/plugins.sh@89 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:15:40.540   18:36:26 sma.sma_plugins -- sma/plugins.sh@100 -- # sma_waitforlisten
00:15:40.540    18:36:26 sma.sma_plugins -- sma/plugins.sh@89 -- # cat
00:15:40.540   18:36:26 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:15:40.540   18:36:26 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:15:40.540   18:36:26 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:15:40.540   18:36:26 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:40.540   18:36:26 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:40.540   18:36:26 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:15:40.540  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:40.541  I0000 00:00:1731864987.088988  484522 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:41.477   18:36:27 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:15:41.477   18:36:27 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:41.477   18:36:27 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:41.477   18:36:27 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:15:41.477    18:36:27 sma.sma_plugins -- sma/plugins.sh@102 -- # create_device nvme
00:15:41.477    18:36:27 sma.sma_plugins -- sma/plugins.sh@102 -- # jq -r .handle
00:15:41.477    18:36:27 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:41.736  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:41.736  I0000 00:00:1731864988.134658  484755 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:41.736  I0000 00:00:1731864988.136461  484755 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:41.736   18:36:28 sma.sma_plugins -- sma/plugins.sh@102 -- # [[ nvme:plugin2-device1:nop == \n\v\m\e\:\p\l\u\g\i\n\2\-\d\e\v\i\c\e\1\:\n\o\p ]]
00:15:41.736    18:36:28 sma.sma_plugins -- sma/plugins.sh@103 -- # create_device nvmf_tcp
00:15:41.736    18:36:28 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:41.736    18:36:28 sma.sma_plugins -- sma/plugins.sh@103 -- # jq -r .handle
00:15:41.995  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:41.995  I0000 00:00:1731864988.368374  484783 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:41.995  I0000 00:00:1731864988.370133  484783 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:41.995   18:36:28 sma.sma_plugins -- sma/plugins.sh@103 -- # [[ nvmf_tcp:plugin2-device2:nop == \n\v\m\f\_\t\c\p\:\p\l\u\g\i\n\2\-\d\e\v\i\c\e\2\:\n\o\p ]]
00:15:41.995   18:36:28 sma.sma_plugins -- sma/plugins.sh@105 -- # killprocess 484522
00:15:41.995   18:36:28 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 484522 ']'
00:15:41.995   18:36:28 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 484522
00:15:41.995    18:36:28 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:15:41.995   18:36:28 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:41.995    18:36:28 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 484522
00:15:41.995   18:36:28 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:15:41.995   18:36:28 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:15:41.995   18:36:28 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 484522'
00:15:41.995  killing process with pid 484522
00:15:41.995   18:36:28 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 484522
00:15:41.995   18:36:28 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 484522
00:15:41.995   18:36:28 sma.sma_plugins -- sma/plugins.sh@118 -- # smapid=484982
00:15:41.995   18:36:28 sma.sma_plugins -- sma/plugins.sh@119 -- # sma_waitforlisten
00:15:41.995   18:36:28 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:15:41.995   18:36:28 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:15:41.995   18:36:28 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:15:41.995   18:36:28 sma.sma_plugins -- sma/plugins.sh@108 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:15:41.995   18:36:28 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:41.995   18:36:28 sma.sma_plugins -- sma/plugins.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:15:41.995   18:36:28 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:41.995    18:36:28 sma.sma_plugins -- sma/plugins.sh@108 -- # cat
00:15:41.995   18:36:28 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:15:42.254  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:42.254  I0000 00:00:1731864988.693450  484982 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:43.190   18:36:29 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:15:43.190   18:36:29 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:43.190   18:36:29 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:43.190   18:36:29 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:15:43.190    18:36:29 sma.sma_plugins -- sma/plugins.sh@121 -- # create_device nvme
00:15:43.190    18:36:29 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:43.190    18:36:29 sma.sma_plugins -- sma/plugins.sh@121 -- # jq -r .handle
00:15:43.190  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:43.190  I0000 00:00:1731864989.725748  485063 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:43.190  I0000 00:00:1731864989.727464  485063 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:43.190   18:36:29 sma.sma_plugins -- sma/plugins.sh@121 -- # [[ nvme:plugin1-device1:nop == \n\v\m\e\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\1\:\n\o\p ]]
00:15:43.190    18:36:29 sma.sma_plugins -- sma/plugins.sh@122 -- # create_device nvmf_tcp
00:15:43.190    18:36:29 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:43.190    18:36:29 sma.sma_plugins -- sma/plugins.sh@122 -- # jq -r .handle
00:15:43.449  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:43.449  I0000 00:00:1731864989.953786  485271 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:43.449  I0000 00:00:1731864989.955422  485271 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:43.449   18:36:29 sma.sma_plugins -- sma/plugins.sh@122 -- # [[ nvmf_tcp:plugin2-device2:nop == \n\v\m\f\_\t\c\p\:\p\l\u\g\i\n\2\-\d\e\v\i\c\e\2\:\n\o\p ]]
00:15:43.449   18:36:29 sma.sma_plugins -- sma/plugins.sh@124 -- # killprocess 484982
00:15:43.449   18:36:29 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 484982 ']'
00:15:43.449   18:36:29 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 484982
00:15:43.449    18:36:29 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:15:43.449   18:36:29 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:43.449    18:36:29 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 484982
00:15:43.449   18:36:30 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:15:43.449   18:36:30 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:15:43.449   18:36:30 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 484982'
00:15:43.449  killing process with pid 484982
00:15:43.449   18:36:30 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 484982
00:15:43.449   18:36:30 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 484982
00:15:43.707   18:36:30 sma.sma_plugins -- sma/plugins.sh@134 -- # smapid=485301
00:15:43.707   18:36:30 sma.sma_plugins -- sma/plugins.sh@127 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:15:43.707   18:36:30 sma.sma_plugins -- sma/plugins.sh@135 -- # sma_waitforlisten
00:15:43.707   18:36:30 sma.sma_plugins -- sma/plugins.sh@127 -- # SMA_PLUGINS=plugin1:plugin2
00:15:43.707    18:36:30 sma.sma_plugins -- sma/plugins.sh@127 -- # cat
00:15:43.707   18:36:30 sma.sma_plugins -- sma/plugins.sh@127 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:15:43.707   18:36:30 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:15:43.707   18:36:30 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:15:43.707   18:36:30 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:15:43.707   18:36:30 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:43.707   18:36:30 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:43.707   18:36:30 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:15:43.707  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:43.707  I0000 00:00:1731864990.269823  485301 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:44.645   18:36:31 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:15:44.645   18:36:31 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:44.645   18:36:31 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:44.645   18:36:31 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:15:44.645    18:36:31 sma.sma_plugins -- sma/plugins.sh@137 -- # create_device nvme
00:15:44.645    18:36:31 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:44.645    18:36:31 sma.sma_plugins -- sma/plugins.sh@137 -- # jq -r .handle
00:15:44.904  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:44.904  I0000 00:00:1731864991.308918  485534 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:44.904  I0000 00:00:1731864991.310812  485534 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:44.904   18:36:31 sma.sma_plugins -- sma/plugins.sh@137 -- # [[ nvme:plugin1-device1:nop == \n\v\m\e\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\1\:\n\o\p ]]
00:15:44.904    18:36:31 sma.sma_plugins -- sma/plugins.sh@138 -- # create_device nvmf_tcp
00:15:44.904    18:36:31 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:44.904    18:36:31 sma.sma_plugins -- sma/plugins.sh@138 -- # jq -r .handle
00:15:45.164  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:45.164  I0000 00:00:1731864991.540374  485565 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:45.164  I0000 00:00:1731864991.541824  485565 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:45.164   18:36:31 sma.sma_plugins -- sma/plugins.sh@138 -- # [[ nvmf_tcp:plugin2-device2:nop == \n\v\m\f\_\t\c\p\:\p\l\u\g\i\n\2\-\d\e\v\i\c\e\2\:\n\o\p ]]
00:15:45.164   18:36:31 sma.sma_plugins -- sma/plugins.sh@140 -- # killprocess 485301
00:15:45.164   18:36:31 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 485301 ']'
00:15:45.164   18:36:31 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 485301
00:15:45.164    18:36:31 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:15:45.164   18:36:31 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:45.164    18:36:31 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 485301
00:15:45.164   18:36:31 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:15:45.164   18:36:31 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:15:45.164   18:36:31 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 485301'
00:15:45.164  killing process with pid 485301
00:15:45.164   18:36:31 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 485301
00:15:45.164   18:36:31 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 485301
00:15:45.164   18:36:31 sma.sma_plugins -- sma/plugins.sh@152 -- # smapid=485595
00:15:45.164   18:36:31 sma.sma_plugins -- sma/plugins.sh@143 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:15:45.164   18:36:31 sma.sma_plugins -- sma/plugins.sh@143 -- # SMA_PLUGINS=plugin1
00:15:45.164   18:36:31 sma.sma_plugins -- sma/plugins.sh@143 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:15:45.164   18:36:31 sma.sma_plugins -- sma/plugins.sh@153 -- # sma_waitforlisten
00:15:45.164    18:36:31 sma.sma_plugins -- sma/plugins.sh@143 -- # cat
00:15:45.164   18:36:31 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:15:45.164   18:36:31 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:15:45.164   18:36:31 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:15:45.164   18:36:31 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:45.164   18:36:31 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:45.164   18:36:31 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:15:45.424  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:45.424  I0000 00:00:1731864991.850131  485595 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:46.372   18:36:32 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:15:46.372   18:36:32 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:46.372   18:36:32 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:46.372   18:36:32 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:15:46.372    18:36:32 sma.sma_plugins -- sma/plugins.sh@155 -- # create_device nvme
00:15:46.372    18:36:32 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:46.372    18:36:32 sma.sma_plugins -- sma/plugins.sh@155 -- # jq -r .handle
00:15:46.372  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:46.373  I0000 00:00:1731864992.885972  485830 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:46.373  I0000 00:00:1731864992.887835  485830 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:46.373   18:36:32 sma.sma_plugins -- sma/plugins.sh@155 -- # [[ nvme:plugin1-device1:nop == \n\v\m\e\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\1\:\n\o\p ]]
00:15:46.373    18:36:32 sma.sma_plugins -- sma/plugins.sh@156 -- # create_device nvmf_tcp
00:15:46.373    18:36:32 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:46.373    18:36:32 sma.sma_plugins -- sma/plugins.sh@156 -- # jq -r .handle
00:15:46.632  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:46.632  I0000 00:00:1731864993.113574  485858 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:46.632  I0000 00:00:1731864993.115293  485858 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:46.632   18:36:33 sma.sma_plugins -- sma/plugins.sh@156 -- # [[ nvmf_tcp:plugin2-device2:nop == \n\v\m\f\_\t\c\p\:\p\l\u\g\i\n\2\-\d\e\v\i\c\e\2\:\n\o\p ]]
00:15:46.632   18:36:33 sma.sma_plugins -- sma/plugins.sh@158 -- # killprocess 485595
00:15:46.632   18:36:33 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 485595 ']'
00:15:46.632   18:36:33 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 485595
00:15:46.632    18:36:33 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:15:46.632   18:36:33 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:46.632    18:36:33 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 485595
00:15:46.632   18:36:33 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:15:46.632   18:36:33 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:15:46.632   18:36:33 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 485595'
00:15:46.632  killing process with pid 485595
00:15:46.632   18:36:33 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 485595
00:15:46.632   18:36:33 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 485595
00:15:46.891   18:36:33 sma.sma_plugins -- sma/plugins.sh@161 -- # crypto_engines=(crypto-plugin1 crypto-plugin2)
00:15:46.891   18:36:33 sma.sma_plugins -- sma/plugins.sh@162 -- # for crypto in "${crypto_engines[@]}"
00:15:46.891   18:36:33 sma.sma_plugins -- sma/plugins.sh@175 -- # smapid=486030
00:15:46.891   18:36:33 sma.sma_plugins -- sma/plugins.sh@163 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:15:46.891   18:36:33 sma.sma_plugins -- sma/plugins.sh@176 -- # sma_waitforlisten
00:15:46.891    18:36:33 sma.sma_plugins -- sma/plugins.sh@163 -- # cat
00:15:46.891   18:36:33 sma.sma_plugins -- sma/plugins.sh@163 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:15:46.891   18:36:33 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:15:46.891   18:36:33 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:15:46.891   18:36:33 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:15:46.891   18:36:33 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:46.891   18:36:33 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:46.891   18:36:33 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:15:46.891  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:46.891  I0000 00:00:1731864993.426443  486030 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:47.828   18:36:34 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:15:47.828   18:36:34 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:47.828   18:36:34 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:47.828   18:36:34 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:15:47.828    18:36:34 sma.sma_plugins -- sma/plugins.sh@178 -- # create_device nvme
00:15:47.828    18:36:34 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:47.828    18:36:34 sma.sma_plugins -- sma/plugins.sh@178 -- # jq -r .handle
00:15:48.087  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:48.087  I0000 00:00:1731864994.502733  486133 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:48.087  I0000 00:00:1731864994.504440  486133 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:48.087   18:36:34 sma.sma_plugins -- sma/plugins.sh@178 -- # [[ nvme:plugin1-device1:crypto-plugin1 == nvme:plugin1-device1:crypto-plugin1 ]]
00:15:48.087    18:36:34 sma.sma_plugins -- sma/plugins.sh@179 -- # create_device nvmf_tcp
00:15:48.087    18:36:34 sma.sma_plugins -- sma/plugins.sh@179 -- # jq -r .handle
00:15:48.087    18:36:34 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:48.346  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:48.346  I0000 00:00:1731864994.723019  486345 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:48.346  I0000 00:00:1731864994.724500  486345 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:48.346   18:36:34 sma.sma_plugins -- sma/plugins.sh@179 -- # [[ nvmf_tcp:plugin2-device2:crypto-plugin1 == nvmf_tcp:plugin2-device2:crypto-plugin1 ]]
00:15:48.346   18:36:34 sma.sma_plugins -- sma/plugins.sh@181 -- # killprocess 486030
00:15:48.346   18:36:34 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 486030 ']'
00:15:48.346   18:36:34 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 486030
00:15:48.346    18:36:34 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:15:48.346   18:36:34 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:48.346    18:36:34 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 486030
00:15:48.346   18:36:34 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:15:48.346   18:36:34 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:15:48.346   18:36:34 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 486030'
00:15:48.346  killing process with pid 486030
00:15:48.346   18:36:34 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 486030
00:15:48.346   18:36:34 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 486030
00:15:48.346   18:36:34 sma.sma_plugins -- sma/plugins.sh@162 -- # for crypto in "${crypto_engines[@]}"
00:15:48.346   18:36:34 sma.sma_plugins -- sma/plugins.sh@175 -- # smapid=486374
00:15:48.346    18:36:34 sma.sma_plugins -- sma/plugins.sh@163 -- # cat
00:15:48.346   18:36:34 sma.sma_plugins -- sma/plugins.sh@176 -- # sma_waitforlisten
00:15:48.346   18:36:34 sma.sma_plugins -- sma/plugins.sh@163 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:15:48.346   18:36:34 sma.sma_plugins -- sma/plugins.sh@163 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:15:48.346   18:36:34 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:15:48.346   18:36:34 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:15:48.346   18:36:34 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:15:48.346   18:36:34 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:48.346   18:36:34 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:48.346   18:36:34 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:15:48.605  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:48.605  I0000 00:00:1731864995.025605  486374 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:49.543   18:36:35 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:15:49.543   18:36:35 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:49.543   18:36:35 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:49.543   18:36:35 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:15:49.543    18:36:35 sma.sma_plugins -- sma/plugins.sh@178 -- # create_device nvme
00:15:49.543    18:36:35 sma.sma_plugins -- sma/plugins.sh@178 -- # jq -r .handle
00:15:49.543    18:36:35 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:49.543  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:49.543  I0000 00:00:1731864996.070310  486609 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:49.543  I0000 00:00:1731864996.072171  486609 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:49.543   18:36:36 sma.sma_plugins -- sma/plugins.sh@178 -- # [[ nvme:plugin1-device1:crypto-plugin2 == nvme:plugin1-device1:crypto-plugin2 ]]
00:15:49.543    18:36:36 sma.sma_plugins -- sma/plugins.sh@179 -- # create_device nvmf_tcp
00:15:49.543    18:36:36 sma.sma_plugins -- sma/plugins.sh@179 -- # jq -r .handle
00:15:49.543    18:36:36 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:49.803  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:49.803  I0000 00:00:1731864996.283521  486637 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:49.803  I0000 00:00:1731864996.285178  486637 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:49.803   18:36:36 sma.sma_plugins -- sma/plugins.sh@179 -- # [[ nvmf_tcp:plugin2-device2:crypto-plugin2 == nvmf_tcp:plugin2-device2:crypto-plugin2 ]]
00:15:49.803   18:36:36 sma.sma_plugins -- sma/plugins.sh@181 -- # killprocess 486374
00:15:49.803   18:36:36 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 486374 ']'
00:15:49.803   18:36:36 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 486374
00:15:49.803    18:36:36 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:15:49.803   18:36:36 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:49.803    18:36:36 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 486374
00:15:49.803   18:36:36 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:15:49.803   18:36:36 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:15:49.803   18:36:36 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 486374'
00:15:49.803  killing process with pid 486374
00:15:49.803   18:36:36 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 486374
00:15:49.803   18:36:36 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 486374
00:15:50.063   18:36:36 sma.sma_plugins -- sma/plugins.sh@184 -- # cleanup
00:15:50.063   18:36:36 sma.sma_plugins -- sma/plugins.sh@13 -- # killprocess 483237
00:15:50.063   18:36:36 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 483237 ']'
00:15:50.063   18:36:36 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 483237
00:15:50.063    18:36:36 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:15:50.063   18:36:36 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:50.063    18:36:36 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 483237
00:15:50.063   18:36:36 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:15:50.063   18:36:36 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:15:50.063   18:36:36 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 483237'
00:15:50.063  killing process with pid 483237
00:15:50.063   18:36:36 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 483237
00:15:50.063   18:36:36 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 483237
00:15:50.323   18:36:36 sma.sma_plugins -- sma/plugins.sh@14 -- # killprocess 486374
00:15:50.323   18:36:36 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 486374 ']'
00:15:50.323   18:36:36 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 486374
00:15:50.323  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (486374) - No such process
00:15:50.323   18:36:36 sma.sma_plugins -- common/autotest_common.sh@981 -- # echo 'Process with pid 486374 is not found'
00:15:50.323  Process with pid 486374 is not found
00:15:50.323   18:36:36 sma.sma_plugins -- sma/plugins.sh@185 -- # trap - SIGINT SIGTERM EXIT
00:15:50.323  
00:15:50.323  real	0m15.950s
00:15:50.323  user	0m22.197s
00:15:50.323  sys	0m1.823s
00:15:50.323   18:36:36 sma.sma_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:50.323   18:36:36 sma.sma_plugins -- common/autotest_common.sh@10 -- # set +x
00:15:50.323  ************************************
00:15:50.323  END TEST sma_plugins
00:15:50.323  ************************************
00:15:50.323   18:36:36 sma -- sma/sma.sh@14 -- # run_test sma_discovery /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/discovery.sh
00:15:50.323   18:36:36 sma -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:50.323   18:36:36 sma -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:50.323   18:36:36 sma -- common/autotest_common.sh@10 -- # set +x
00:15:50.323  ************************************
00:15:50.323  START TEST sma_discovery
00:15:50.323  ************************************
00:15:50.323   18:36:36 sma.sma_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/discovery.sh
00:15:50.583  * Looking for test storage...
00:15:50.583  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:15:50.583    18:36:36 sma.sma_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:15:50.583     18:36:36 sma.sma_discovery -- common/autotest_common.sh@1693 -- # lcov --version
00:15:50.583     18:36:36 sma.sma_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:15:50.583    18:36:36 sma.sma_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:15:50.583    18:36:36 sma.sma_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:15:50.583    18:36:36 sma.sma_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l
00:15:50.583    18:36:36 sma.sma_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l
00:15:50.583    18:36:36 sma.sma_discovery -- scripts/common.sh@336 -- # IFS=.-:
00:15:50.583    18:36:36 sma.sma_discovery -- scripts/common.sh@336 -- # read -ra ver1
00:15:50.583    18:36:36 sma.sma_discovery -- scripts/common.sh@337 -- # IFS=.-:
00:15:50.583    18:36:36 sma.sma_discovery -- scripts/common.sh@337 -- # read -ra ver2
00:15:50.583    18:36:36 sma.sma_discovery -- scripts/common.sh@338 -- # local 'op=<'
00:15:50.583    18:36:36 sma.sma_discovery -- scripts/common.sh@340 -- # ver1_l=2
00:15:50.583    18:36:36 sma.sma_discovery -- scripts/common.sh@341 -- # ver2_l=1
00:15:50.583    18:36:36 sma.sma_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:15:50.583    18:36:36 sma.sma_discovery -- scripts/common.sh@344 -- # case "$op" in
00:15:50.583    18:36:36 sma.sma_discovery -- scripts/common.sh@345 -- # : 1
00:15:50.583    18:36:36 sma.sma_discovery -- scripts/common.sh@364 -- # (( v = 0 ))
00:15:50.583    18:36:36 sma.sma_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:15:50.583     18:36:36 sma.sma_discovery -- scripts/common.sh@365 -- # decimal 1
00:15:50.583     18:36:36 sma.sma_discovery -- scripts/common.sh@353 -- # local d=1
00:15:50.583     18:36:36 sma.sma_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:50.583     18:36:36 sma.sma_discovery -- scripts/common.sh@355 -- # echo 1
00:15:50.583    18:36:36 sma.sma_discovery -- scripts/common.sh@365 -- # ver1[v]=1
00:15:50.583     18:36:36 sma.sma_discovery -- scripts/common.sh@366 -- # decimal 2
00:15:50.583     18:36:36 sma.sma_discovery -- scripts/common.sh@353 -- # local d=2
00:15:50.583     18:36:36 sma.sma_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:15:50.583     18:36:36 sma.sma_discovery -- scripts/common.sh@355 -- # echo 2
00:15:50.583    18:36:36 sma.sma_discovery -- scripts/common.sh@366 -- # ver2[v]=2
00:15:50.583    18:36:36 sma.sma_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:15:50.583    18:36:36 sma.sma_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:15:50.583    18:36:36 sma.sma_discovery -- scripts/common.sh@368 -- # return 0
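The xtrace above walks `scripts/common.sh`'s `lt 1.15 2` / `cmp_versions` helpers: split both versions on `.`, compare component by component, return as soon as one side wins. A minimal standalone sketch of the same dotted-version check (note: `lt_version` is a hypothetical name for illustration; the real helpers are `lt` and `cmp_versions` in scripts/common.sh and handle more operators):

```shell
# Sketch of a dotted-version "less than" test, mirroring the cmp_versions
# trace above. Splits on ".", compares numerically per component, treats
# missing components as 0 (so 1.15 vs 2 compares 1<2 and stops).
lt_version() {
    local IFS=.
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local len=${#ver1[@]} v
    if (( ${#ver2[@]} > len )); then
        len=${#ver2[@]}
    fi
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0    # first differing component decides
        (( a > b )) && return 1
    done
    return 1                        # equal versions are not "less than"
}

lt_version 1.15 2 && echo "1.15 < 2"
```

This matches the trace: with `1.15` vs `2`, the first components `1` and `2` differ, so the loop decides at `v=0` and the function returns success.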
00:15:50.583    18:36:36 sma.sma_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:15:50.583    18:36:36 sma.sma_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:15:50.583  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:50.583  		--rc genhtml_branch_coverage=1
00:15:50.583  		--rc genhtml_function_coverage=1
00:15:50.583  		--rc genhtml_legend=1
00:15:50.583  		--rc geninfo_all_blocks=1
00:15:50.583  		--rc geninfo_unexecuted_blocks=1
00:15:50.583  		
00:15:50.583  		'
00:15:50.583    18:36:36 sma.sma_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:15:50.583  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:50.583  		--rc genhtml_branch_coverage=1
00:15:50.583  		--rc genhtml_function_coverage=1
00:15:50.583  		--rc genhtml_legend=1
00:15:50.583  		--rc geninfo_all_blocks=1
00:15:50.583  		--rc geninfo_unexecuted_blocks=1
00:15:50.583  		
00:15:50.583  		'
00:15:50.583    18:36:36 sma.sma_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:15:50.583  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:50.583  		--rc genhtml_branch_coverage=1
00:15:50.583  		--rc genhtml_function_coverage=1
00:15:50.583  		--rc genhtml_legend=1
00:15:50.583  		--rc geninfo_all_blocks=1
00:15:50.583  		--rc geninfo_unexecuted_blocks=1
00:15:50.583  		
00:15:50.583  		'
00:15:50.583    18:36:36 sma.sma_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:15:50.583  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:50.583  		--rc genhtml_branch_coverage=1
00:15:50.583  		--rc genhtml_function_coverage=1
00:15:50.583  		--rc genhtml_legend=1
00:15:50.583  		--rc geninfo_all_blocks=1
00:15:50.583  		--rc geninfo_unexecuted_blocks=1
00:15:50.583  		
00:15:50.583  		'
00:15:50.583   18:36:36 sma.sma_discovery -- sma/discovery.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:15:50.583   18:36:36 sma.sma_discovery -- sma/discovery.sh@12 -- # sma_py=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:50.583   18:36:36 sma.sma_discovery -- sma/discovery.sh@13 -- # rpc_py=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:15:50.583   18:36:36 sma.sma_discovery -- sma/discovery.sh@15 -- # t1sock=/var/tmp/spdk.sock1
00:15:50.583   18:36:36 sma.sma_discovery -- sma/discovery.sh@16 -- # t2sock=/var/tmp/spdk.sock2
00:15:50.583   18:36:36 sma.sma_discovery -- sma/discovery.sh@17 -- # invalid_port=8008
00:15:50.583   18:36:36 sma.sma_discovery -- sma/discovery.sh@18 -- # t1dscport=8009
00:15:50.583   18:36:36 sma.sma_discovery -- sma/discovery.sh@19 -- # t2dscport1=8010
00:15:50.583   18:36:36 sma.sma_discovery -- sma/discovery.sh@20 -- # t2dscport2=8011
00:15:50.583   18:36:36 sma.sma_discovery -- sma/discovery.sh@21 -- # t1nqn=nqn.2016-06.io.spdk:node1
00:15:50.583   18:36:36 sma.sma_discovery -- sma/discovery.sh@22 -- # t2nqn=nqn.2016-06.io.spdk:node2
00:15:50.583   18:36:36 sma.sma_discovery -- sma/discovery.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host0
00:15:50.583   18:36:36 sma.sma_discovery -- sma/discovery.sh@24 -- # cleanup_period=1
00:15:50.583   18:36:36 sma.sma_discovery -- sma/discovery.sh@132 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:15:50.583   18:36:36 sma.sma_discovery -- sma/discovery.sh@135 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/spdk.sock1 -m 0x1
00:15:50.583   18:36:36 sma.sma_discovery -- sma/discovery.sh@136 -- # t1pid=486923
00:15:50.583   18:36:36 sma.sma_discovery -- sma/discovery.sh@138 -- # t2pid=486924
00:15:50.583   18:36:36 sma.sma_discovery -- sma/discovery.sh@137 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/spdk.sock2 -m 0x2
00:15:50.583   18:36:36 sma.sma_discovery -- sma/discovery.sh@141 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x4
00:15:50.583   18:36:36 sma.sma_discovery -- sma/discovery.sh@142 -- # tgtpid=486925
00:15:50.583   18:36:36 sma.sma_discovery -- sma/discovery.sh@153 -- # smapid=486926
00:15:50.583   18:36:36 sma.sma_discovery -- sma/discovery.sh@155 -- # waitforlisten 486925
00:15:50.583   18:36:36 sma.sma_discovery -- common/autotest_common.sh@835 -- # '[' -z 486925 ']'
00:15:50.583   18:36:36 sma.sma_discovery -- sma/discovery.sh@145 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:15:50.583   18:36:36 sma.sma_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:50.583   18:36:36 sma.sma_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:50.583    18:36:36 sma.sma_discovery -- sma/discovery.sh@145 -- # cat
00:15:50.583   18:36:36 sma.sma_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:50.583  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:50.583   18:36:36 sma.sma_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:50.583   18:36:36 sma.sma_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:50.583  [2024-11-17 18:36:37.081225] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:15:50.583  [2024-11-17 18:36:37.081307] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:15:50.583  [2024-11-17 18:36:37.081351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid486923 ]
00:15:50.583  [2024-11-17 18:36:37.081399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid486925 ]
00:15:50.583  [2024-11-17 18:36:37.082108] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:15:50.583  [2024-11-17 18:36:37.082244] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid486924 ]
00:15:50.583  EAL: No free 2048 kB hugepages reported on node 1
00:15:50.583  EAL: No free 2048 kB hugepages reported on node 1
00:15:50.583  EAL: No free 2048 kB hugepages reported on node 1
00:15:50.842  [2024-11-17 18:36:37.214709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:50.842  [2024-11-17 18:36:37.214750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:50.842  [2024-11-17 18:36:37.228701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:50.842  [2024-11-17 18:36:37.264657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:15:50.842  [2024-11-17 18:36:37.277422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:50.842  [2024-11-17 18:36:37.292690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:15:51.410   18:36:37 sma.sma_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:51.410   18:36:37 sma.sma_discovery -- common/autotest_common.sh@868 -- # return 0
00:15:51.410   18:36:37 sma.sma_discovery -- sma/discovery.sh@156 -- # waitforlisten 486923 /var/tmp/spdk.sock1
00:15:51.410   18:36:37 sma.sma_discovery -- common/autotest_common.sh@835 -- # '[' -z 486923 ']'
00:15:51.410   18:36:37 sma.sma_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock1
00:15:51.410   18:36:37 sma.sma_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:51.410   18:36:37 sma.sma_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock1...'
00:15:51.410  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock1...
00:15:51.410   18:36:37 sma.sma_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:51.410   18:36:37 sma.sma_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:51.669   18:36:38 sma.sma_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:51.669   18:36:38 sma.sma_discovery -- common/autotest_common.sh@868 -- # return 0
00:15:51.669   18:36:38 sma.sma_discovery -- sma/discovery.sh@157 -- # waitforlisten 486924 /var/tmp/spdk.sock2
00:15:51.669   18:36:38 sma.sma_discovery -- common/autotest_common.sh@835 -- # '[' -z 486924 ']'
00:15:51.669   18:36:38 sma.sma_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock2
00:15:51.669   18:36:38 sma.sma_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:51.669   18:36:38 sma.sma_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock2...'
00:15:51.669  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock2...
00:15:51.669   18:36:38 sma.sma_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:51.669   18:36:38 sma.sma_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:51.928  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:51.928  I0000 00:00:1731864998.250546  486926 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:51.928  [2024-11-17 18:36:38.261531] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:15:51.928   18:36:38 sma.sma_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:51.928   18:36:38 sma.sma_discovery -- common/autotest_common.sh@868 -- # return 0
00:15:51.928    18:36:38 sma.sma_discovery -- sma/discovery.sh@162 -- # uuidgen
00:15:51.929   18:36:38 sma.sma_discovery -- sma/discovery.sh@162 -- # t1uuid=1ab963a7-7702-420c-8e92-05dd280e2944
00:15:51.929    18:36:38 sma.sma_discovery -- sma/discovery.sh@163 -- # uuidgen
00:15:51.929   18:36:38 sma.sma_discovery -- sma/discovery.sh@163 -- # t2uuid=00a8bebe-9195-4984-8a1c-037f87ecc225
00:15:51.929    18:36:38 sma.sma_discovery -- sma/discovery.sh@164 -- # uuidgen
00:15:51.929   18:36:38 sma.sma_discovery -- sma/discovery.sh@164 -- # t2uuid2=09f7cd9f-e91d-4c0d-990a-282c495eb034
00:15:51.929   18:36:38 sma.sma_discovery -- sma/discovery.sh@166 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock1
00:15:52.187  [2024-11-17 18:36:38.550640] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:15:52.187  [2024-11-17 18:36:38.591006] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:15:52.187  [2024-11-17 18:36:38.598931] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 8009 ***
00:15:52.187  null0
00:15:52.187   18:36:38 sma.sma_discovery -- sma/discovery.sh@176 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock2
00:15:52.446  [2024-11-17 18:36:38.798120] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:15:52.447  [2024-11-17 18:36:38.854449] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 ***
00:15:52.447  [2024-11-17 18:36:38.862432] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 8010 ***
00:15:52.447  [2024-11-17 18:36:38.870428] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 8011 ***
00:15:52.447  null0
00:15:52.447  null1
00:15:52.447   18:36:38 sma.sma_discovery -- sma/discovery.sh@190 -- # sma_waitforlisten
00:15:52.447   18:36:38 sma.sma_discovery -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:15:52.447   18:36:38 sma.sma_discovery -- sma/common.sh@8 -- # local sma_port=8080
00:15:52.447   18:36:38 sma.sma_discovery -- sma/common.sh@10 -- # (( i = 0 ))
00:15:52.447   18:36:38 sma.sma_discovery -- sma/common.sh@10 -- # (( i < 5 ))
00:15:52.447   18:36:38 sma.sma_discovery -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:52.447   18:36:38 sma.sma_discovery -- sma/common.sh@12 -- # return 0
00:15:52.447   18:36:38 sma.sma_discovery -- sma/discovery.sh@192 -- # localnqn=nqn.2016-06.io.spdk:local0
00:15:52.447    18:36:38 sma.sma_discovery -- sma/discovery.sh@195 -- # create_device nqn.2016-06.io.spdk:local0
00:15:52.447    18:36:38 sma.sma_discovery -- sma/discovery.sh@69 -- # local nqn=nqn.2016-06.io.spdk:local0
00:15:52.447    18:36:38 sma.sma_discovery -- sma/discovery.sh@70 -- # local volume_id=
00:15:52.447    18:36:38 sma.sma_discovery -- sma/discovery.sh@195 -- # jq -r .handle
00:15:52.447    18:36:38 sma.sma_discovery -- sma/discovery.sh@71 -- # local volume=
00:15:52.447    18:36:38 sma.sma_discovery -- sma/discovery.sh@73 -- # shift
00:15:52.447    18:36:38 sma.sma_discovery -- sma/discovery.sh@74 -- # [[ -n '' ]]
00:15:52.447    18:36:38 sma.sma_discovery -- sma/discovery.sh@78 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:52.705  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:52.705  I0000 00:00:1731864999.135854  487189 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:52.705  I0000 00:00:1731864999.137565  487189 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:52.705  [2024-11-17 18:36:39.160375] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 ***
00:15:52.705   18:36:39 sma.sma_discovery -- sma/discovery.sh@195 -- # device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:15:52.705   18:36:39 sma.sma_discovery -- sma/discovery.sh@198 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:52.964  [
00:15:52.964    {
00:15:52.964      "nqn": "nqn.2016-06.io.spdk:local0",
00:15:52.964      "subtype": "NVMe",
00:15:52.964      "listen_addresses": [
00:15:52.964        {
00:15:52.964          "trtype": "TCP",
00:15:52.964          "adrfam": "IPv4",
00:15:52.964          "traddr": "127.0.0.1",
00:15:52.964          "trsvcid": "4419"
00:15:52.964        }
00:15:52.964      ],
00:15:52.964      "allow_any_host": false,
00:15:52.964      "hosts": [],
00:15:52.964      "serial_number": "00000000000000000000",
00:15:52.964      "model_number": "SPDK bdev Controller",
00:15:52.964      "max_namespaces": 32,
00:15:52.964      "min_cntlid": 1,
00:15:52.964      "max_cntlid": 65519,
00:15:52.964      "namespaces": []
00:15:52.964    }
00:15:52.964  ]
00:15:52.964   18:36:39 sma.sma_discovery -- sma/discovery.sh@201 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 1ab963a7-7702-420c-8e92-05dd280e2944 8009 8010
00:15:52.964   18:36:39 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:15:52.964   18:36:39 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:15:52.964   18:36:39 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:52.964    18:36:39 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 1ab963a7-7702-420c-8e92-05dd280e2944 8009 8010
00:15:52.964    18:36:39 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=1ab963a7-7702-420c-8e92-05dd280e2944
00:15:52.964    18:36:39 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:15:52.964    18:36:39 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:15:52.964     18:36:39 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 1ab963a7-7702-420c-8e92-05dd280e2944
00:15:52.964     18:36:39 sma.sma_discovery -- sma/common.sh@20 -- # python
00:15:52.964     18:36:39 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8009 8010
00:15:52.964     18:36:39 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8009' '8010')
00:15:52.964     18:36:39 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:15:52.964     18:36:39 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:15:52.964     18:36:39 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:15:52.964     18:36:39 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:15:52.964     18:36:39 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 ))
00:15:52.964     18:36:39 sma.sma_discovery -- sma/discovery.sh@44 -- # echo ,
00:15:52.964     18:36:39 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:15:52.964     18:36:39 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:15:52.964     18:36:39 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:15:52.964     18:36:39 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 ))
00:15:52.964     18:36:39 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:15:52.964     18:36:39 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
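The `format_endpoints 8009 8010` trace above shows the comma-join loop in sma/discovery.sh: emit each discovery port, and `echo ,` only when `(( i + 1 == $# ))` is false, i.e. between entries but not after the last. A simplified re-implementation for illustration (not the exact body from sma/discovery.sh, which emits JSON endpoint objects via `cat`, not bare port numbers):

```shell
# Sketch of the comma-separated join seen in the format_endpoints trace:
# one entry per argument, "," inserted between entries, none trailing.
format_endpoints() {
    local eps=("$@") i out=""
    for (( i = 0; i < ${#eps[@]}; i++ )); do
        out+="${eps[i]}"
        if (( i + 1 < ${#eps[@]} )); then
            out+=","        # separator only between entries
        fi
    done
    printf '%s\n' "$out"
}

format_endpoints 8009 8010    # prints "8009,8010"
format_endpoints 8010         # prints "8010" (no separator, as in the
                              # single-endpoint trace further below)
```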
00:15:53.222  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:53.222  I0000 00:00:1731864999.650572  487413 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:53.222  I0000 00:00:1731864999.652572  487413 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:55.757  {}
00:15:55.757    18:36:41 sma.sma_discovery -- sma/discovery.sh@204 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:55.757    18:36:41 sma.sma_discovery -- sma/discovery.sh@204 -- # jq -r '. | length'
00:15:55.757   18:36:42 sma.sma_discovery -- sma/discovery.sh@204 -- # [[ 2 -eq 2 ]]
00:15:55.757   18:36:42 sma.sma_discovery -- sma/discovery.sh@206 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:55.757   18:36:42 sma.sma_discovery -- sma/discovery.sh@206 -- # jq -r '.[].trid.trsvcid'
00:15:55.757   18:36:42 sma.sma_discovery -- sma/discovery.sh@206 -- # grep 8009
00:15:55.757  8009
00:15:55.757   18:36:42 sma.sma_discovery -- sma/discovery.sh@207 -- # jq -r '.[].trid.trsvcid'
00:15:55.757   18:36:42 sma.sma_discovery -- sma/discovery.sh@207 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:55.757   18:36:42 sma.sma_discovery -- sma/discovery.sh@207 -- # grep 8010
00:15:56.016  8010
00:15:56.016    18:36:42 sma.sma_discovery -- sma/discovery.sh@210 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:56.016    18:36:42 sma.sma_discovery -- sma/discovery.sh@210 -- # jq -r '.[].namespaces | length'
00:15:56.275   18:36:42 sma.sma_discovery -- sma/discovery.sh@210 -- # [[ 1 -eq 1 ]]
00:15:56.275    18:36:42 sma.sma_discovery -- sma/discovery.sh@211 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:56.275    18:36:42 sma.sma_discovery -- sma/discovery.sh@211 -- # jq -r '.[].namespaces[0].uuid'
00:15:56.534   18:36:42 sma.sma_discovery -- sma/discovery.sh@211 -- # [[ 1ab963a7-7702-420c-8e92-05dd280e2944 == \1\a\b\9\6\3\a\7\-\7\7\0\2\-\4\2\0\c\-\8\e\9\2\-\0\5\d\d\2\8\0\e\2\9\4\4 ]]
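The heavily backslash-escaped right-hand side in the xtrace line above is a bash artifact, not script source: inside `[[ ... == ... ]]` an unquoted right-hand side is a glob pattern, so the script escapes (or quotes) it to force a literal UUID comparison, and `set -x` prints the escaped form. A small standalone illustration (the UUID value is just an example):

```shell
# Inside [[ ]], an unquoted RHS of == is glob-matched; quoting it forces
# a literal string comparison. xtrace renders a quoted/escaped RHS with
# backslashes, which is what the log line above shows.
uuid="1ab963a7-7702-420c-8e92-05dd280e2944"

if [[ $uuid == "$uuid" ]]; then       # literal comparison (quoted RHS)
    echo "literal match"
fi

if [[ $uuid == 1ab963a7-* ]]; then    # glob comparison (unquoted RHS)
    echo "glob match"
fi
```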
00:15:56.534   18:36:42 sma.sma_discovery -- sma/discovery.sh@214 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 00a8bebe-9195-4984-8a1c-037f87ecc225 8010
00:15:56.534   18:36:42 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:15:56.534   18:36:42 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:15:56.534   18:36:42 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:56.534    18:36:42 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 00a8bebe-9195-4984-8a1c-037f87ecc225 8010
00:15:56.534    18:36:42 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=00a8bebe-9195-4984-8a1c-037f87ecc225
00:15:56.534    18:36:42 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:15:56.534    18:36:42 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:15:56.534     18:36:42 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 00a8bebe-9195-4984-8a1c-037f87ecc225
00:15:56.534     18:36:42 sma.sma_discovery -- sma/common.sh@20 -- # python
00:15:56.534     18:36:43 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8010
00:15:56.534     18:36:43 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8010')
00:15:56.534     18:36:43 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:15:56.534     18:36:43 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:15:56.534     18:36:43 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:15:56.534     18:36:43 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:15:56.534     18:36:43 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 1 ))
00:15:56.534     18:36:43 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:15:56.534     18:36:43 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:15:56.794  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:56.794  I0000 00:00:1731865003.333194  488076 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:56.794  I0000 00:00:1731865003.335271  488076 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:57.052  {}
00:15:57.053    18:36:43 sma.sma_discovery -- sma/discovery.sh@217 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:57.053    18:36:43 sma.sma_discovery -- sma/discovery.sh@217 -- # jq -r '. | length'
00:15:57.311   18:36:43 sma.sma_discovery -- sma/discovery.sh@217 -- # [[ 2 -eq 2 ]]
00:15:57.311    18:36:43 sma.sma_discovery -- sma/discovery.sh@218 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:57.311    18:36:43 sma.sma_discovery -- sma/discovery.sh@218 -- # jq -r '.[].namespaces | length'
00:15:57.311   18:36:43 sma.sma_discovery -- sma/discovery.sh@218 -- # [[ 2 -eq 2 ]]
00:15:57.311   18:36:43 sma.sma_discovery -- sma/discovery.sh@219 -- # jq -r '.[].namespaces[].uuid'
00:15:57.311   18:36:43 sma.sma_discovery -- sma/discovery.sh@219 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:57.311   18:36:43 sma.sma_discovery -- sma/discovery.sh@219 -- # grep 1ab963a7-7702-420c-8e92-05dd280e2944
00:15:57.570  1ab963a7-7702-420c-8e92-05dd280e2944
00:15:57.570   18:36:44 sma.sma_discovery -- sma/discovery.sh@220 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:57.570   18:36:44 sma.sma_discovery -- sma/discovery.sh@220 -- # jq -r '.[].namespaces[].uuid'
00:15:57.570   18:36:44 sma.sma_discovery -- sma/discovery.sh@220 -- # grep 00a8bebe-9195-4984-8a1c-037f87ecc225
00:15:57.830  00a8bebe-9195-4984-8a1c-037f87ecc225
00:15:57.830   18:36:44 sma.sma_discovery -- sma/discovery.sh@223 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 1ab963a7-7702-420c-8e92-05dd280e2944
00:15:57.830   18:36:44 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:57.830    18:36:44 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 1ab963a7-7702-420c-8e92-05dd280e2944
00:15:57.830    18:36:44 sma.sma_discovery -- sma/common.sh@20 -- # python
00:15:58.089  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:58.089  I0000 00:00:1731865004.538746  488317 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:58.089  I0000 00:00:1731865004.540543  488317 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:58.089  {}
00:15:58.089    18:36:44 sma.sma_discovery -- sma/discovery.sh@227 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:58.089    18:36:44 sma.sma_discovery -- sma/discovery.sh@227 -- # jq -r '. | length'
00:15:58.348   18:36:44 sma.sma_discovery -- sma/discovery.sh@227 -- # [[ 1 -eq 1 ]]
00:15:58.348   18:36:44 sma.sma_discovery -- sma/discovery.sh@228 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:58.348   18:36:44 sma.sma_discovery -- sma/discovery.sh@228 -- # jq -r '.[].trid.trsvcid'
00:15:58.348   18:36:44 sma.sma_discovery -- sma/discovery.sh@228 -- # grep 8010
00:15:58.607  8010
00:15:58.607    18:36:45 sma.sma_discovery -- sma/discovery.sh@230 -- # jq -r '.[].namespaces | length'
00:15:58.607    18:36:45 sma.sma_discovery -- sma/discovery.sh@230 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:58.866   18:36:45 sma.sma_discovery -- sma/discovery.sh@230 -- # [[ 1 -eq 1 ]]
00:15:58.866    18:36:45 sma.sma_discovery -- sma/discovery.sh@231 -- # jq -r '.[].namespaces[0].uuid'
00:15:58.866    18:36:45 sma.sma_discovery -- sma/discovery.sh@231 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:59.125   18:36:45 sma.sma_discovery -- sma/discovery.sh@231 -- # [[ 00a8bebe-9195-4984-8a1c-037f87ecc225 == \0\0\a\8\b\e\b\e\-\9\1\9\5\-\4\9\8\4\-\8\a\1\c\-\0\3\7\f\8\7\e\c\c\2\2\5 ]]
00:15:59.125   18:36:45 sma.sma_discovery -- sma/discovery.sh@234 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 00a8bebe-9195-4984-8a1c-037f87ecc225
00:15:59.125   18:36:45 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:59.125    18:36:45 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 00a8bebe-9195-4984-8a1c-037f87ecc225
00:15:59.125    18:36:45 sma.sma_discovery -- sma/common.sh@20 -- # python
00:15:59.384  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:59.384  I0000 00:00:1731865005.764681  488567 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:59.384  I0000 00:00:1731865005.766352  488567 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:59.384  {}
00:15:59.384    18:36:45 sma.sma_discovery -- sma/discovery.sh@237 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:59.384    18:36:45 sma.sma_discovery -- sma/discovery.sh@237 -- # jq -r '. | length'
00:15:59.643   18:36:46 sma.sma_discovery -- sma/discovery.sh@237 -- # [[ 0 -eq 0 ]]
00:15:59.643    18:36:46 sma.sma_discovery -- sma/discovery.sh@238 -- # jq -r '.[].namespaces | length'
00:15:59.643    18:36:46 sma.sma_discovery -- sma/discovery.sh@238 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:59.902   18:36:46 sma.sma_discovery -- sma/discovery.sh@238 -- # [[ 0 -eq 0 ]]
00:15:59.902    18:36:46 sma.sma_discovery -- sma/discovery.sh@241 -- # uuidgen
00:15:59.902   18:36:46 sma.sma_discovery -- sma/discovery.sh@241 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 6245b636-6efc-4d00-8ee7-393eaa7f1a48 8009
00:15:59.902   18:36:46 sma.sma_discovery -- common/autotest_common.sh@652 -- # local es=0
00:15:59.902   18:36:46 sma.sma_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 6245b636-6efc-4d00-8ee7-393eaa7f1a48 8009
00:15:59.902   18:36:46 sma.sma_discovery -- common/autotest_common.sh@640 -- # local arg=attach_volume
00:15:59.902   18:36:46 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:59.902    18:36:46 sma.sma_discovery -- common/autotest_common.sh@644 -- # type -t attach_volume
00:15:59.902   18:36:46 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:59.902   18:36:46 sma.sma_discovery -- common/autotest_common.sh@655 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 6245b636-6efc-4d00-8ee7-393eaa7f1a48 8009
00:15:59.902   18:36:46 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:15:59.902   18:36:46 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:15:59.902   18:36:46 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:59.902    18:36:46 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 6245b636-6efc-4d00-8ee7-393eaa7f1a48 8009
00:15:59.902    18:36:46 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=6245b636-6efc-4d00-8ee7-393eaa7f1a48
00:15:59.902    18:36:46 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:15:59.902    18:36:46 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:15:59.902     18:36:46 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 6245b636-6efc-4d00-8ee7-393eaa7f1a48
00:15:59.902     18:36:46 sma.sma_discovery -- sma/common.sh@20 -- # python
00:15:59.902     18:36:46 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8009
00:15:59.902     18:36:46 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8009')
00:15:59.902     18:36:46 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:15:59.902     18:36:46 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:15:59.902     18:36:46 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:15:59.902     18:36:46 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:15:59.902     18:36:46 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 1 ))
00:15:59.902     18:36:46 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:15:59.902     18:36:46 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:16:00.161  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:00.161  I0000 00:00:1731865006.617996  488798 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:00.161  I0000 00:00:1731865006.620077  488798 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:01.539  [2024-11-17 18:36:47.704951] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 6245b636-6efc-4d00-8ee7-393eaa7f1a48
00:16:01.539  [2024-11-17 18:36:47.805181] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 6245b636-6efc-4d00-8ee7-393eaa7f1a48
00:16:01.539  [2024-11-17 18:36:47.905415] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 6245b636-6efc-4d00-8ee7-393eaa7f1a48
00:16:01.539  [2024-11-17 18:36:48.005646] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 6245b636-6efc-4d00-8ee7-393eaa7f1a48
00:16:01.539  [2024-11-17 18:36:48.105877] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 6245b636-6efc-4d00-8ee7-393eaa7f1a48
00:16:01.798  [2024-11-17 18:36:48.206108] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 6245b636-6efc-4d00-8ee7-393eaa7f1a48
00:16:01.798  [2024-11-17 18:36:48.306340] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 6245b636-6efc-4d00-8ee7-393eaa7f1a48
00:16:02.057  [2024-11-17 18:36:48.406575] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 6245b636-6efc-4d00-8ee7-393eaa7f1a48
00:16:02.057  [2024-11-17 18:36:48.506806] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 6245b636-6efc-4d00-8ee7-393eaa7f1a48
00:16:02.057  [2024-11-17 18:36:48.607040] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 6245b636-6efc-4d00-8ee7-393eaa7f1a48
00:16:02.315  [2024-11-17 18:36:48.707273] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 6245b636-6efc-4d00-8ee7-393eaa7f1a48
00:16:02.315  [2024-11-17 18:36:48.707297] bdev.c:8396:_bdev_open_async: *ERROR*: Timed out while waiting for bdev '6245b636-6efc-4d00-8ee7-393eaa7f1a48' to appear
00:16:02.315  Traceback (most recent call last):
00:16:02.315    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:16:02.315      main(sys.argv[1:])
00:16:02.315    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:16:02.315      result = client.call(request['method'], request.get('params', {}))
00:16:02.315               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:02.315    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:16:02.315      response = func(request=json_format.ParseDict(params, input()))
00:16:02.315                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:02.316    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:16:02.316      return _end_unary_response_blocking(state, call, False, None)
00:16:02.316             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:02.316    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:16:02.316      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:16:02.316      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:02.316  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:16:02.316  	status = StatusCode.NOT_FOUND
00:16:02.316  	details = "Volume could not be found"
00:16:02.316  	debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B::1%5D:8080 {created_time:"2024-11-17T18:36:48.724333979+01:00", grpc_status:5, grpc_message:"Volume could not be found"}"
00:16:02.316  >
00:16:02.316   18:36:48 sma.sma_discovery -- common/autotest_common.sh@655 -- # es=1
00:16:02.316   18:36:48 sma.sma_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:02.316   18:36:48 sma.sma_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:02.316   18:36:48 sma.sma_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:02.316    18:36:48 sma.sma_discovery -- sma/discovery.sh@242 -- # jq -r '. | length'
00:16:02.316    18:36:48 sma.sma_discovery -- sma/discovery.sh@242 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:02.574   18:36:48 sma.sma_discovery -- sma/discovery.sh@242 -- # [[ 0 -eq 0 ]]
00:16:02.574    18:36:48 sma.sma_discovery -- sma/discovery.sh@243 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:02.574    18:36:48 sma.sma_discovery -- sma/discovery.sh@243 -- # jq -r '.[].namespaces | length'
00:16:02.837   18:36:49 sma.sma_discovery -- sma/discovery.sh@243 -- # [[ 0 -eq 0 ]]
00:16:02.837   18:36:49 sma.sma_discovery -- sma/discovery.sh@246 -- # volumes=($t1uuid $t2uuid)
00:16:02.837   18:36:49 sma.sma_discovery -- sma/discovery.sh@247 -- # for volume_id in "${volumes[@]}"
00:16:02.837   18:36:49 sma.sma_discovery -- sma/discovery.sh@248 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 1ab963a7-7702-420c-8e92-05dd280e2944 8009 8010
00:16:02.837   18:36:49 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:16:02.837   18:36:49 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:16:02.837   18:36:49 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:02.837    18:36:49 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 1ab963a7-7702-420c-8e92-05dd280e2944 8009 8010
00:16:02.837    18:36:49 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=1ab963a7-7702-420c-8e92-05dd280e2944
00:16:02.837    18:36:49 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:16:02.837    18:36:49 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:16:02.837     18:36:49 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 1ab963a7-7702-420c-8e92-05dd280e2944
00:16:02.837     18:36:49 sma.sma_discovery -- sma/common.sh@20 -- # python
00:16:02.837     18:36:49 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8009 8010
00:16:02.837     18:36:49 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8009' '8010')
00:16:02.837     18:36:49 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:16:02.837     18:36:49 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:16:02.837     18:36:49 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:16:02.837     18:36:49 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:16:02.837     18:36:49 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 ))
00:16:02.837     18:36:49 sma.sma_discovery -- sma/discovery.sh@44 -- # echo ,
00:16:02.837     18:36:49 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:16:02.837     18:36:49 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:16:02.837     18:36:49 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:16:02.837     18:36:49 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 ))
00:16:02.837     18:36:49 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:16:02.837     18:36:49 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:16:03.095  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:03.095  I0000 00:00:1731865009.487591  489251 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:03.095  I0000 00:00:1731865009.489513  489251 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:05.627  {}
00:16:05.627   18:36:51 sma.sma_discovery -- sma/discovery.sh@247 -- # for volume_id in "${volumes[@]}"
00:16:05.627   18:36:51 sma.sma_discovery -- sma/discovery.sh@248 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 00a8bebe-9195-4984-8a1c-037f87ecc225 8009 8010
00:16:05.627   18:36:51 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:16:05.627   18:36:51 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:16:05.627   18:36:51 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:05.627    18:36:51 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 00a8bebe-9195-4984-8a1c-037f87ecc225 8009 8010
00:16:05.627    18:36:51 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=00a8bebe-9195-4984-8a1c-037f87ecc225
00:16:05.627    18:36:51 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:16:05.627    18:36:51 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:16:05.627     18:36:51 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 00a8bebe-9195-4984-8a1c-037f87ecc225
00:16:05.627     18:36:51 sma.sma_discovery -- sma/common.sh@20 -- # python
00:16:05.628     18:36:51 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8009 8010
00:16:05.628     18:36:51 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8009' '8010')
00:16:05.628     18:36:51 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:16:05.628     18:36:51 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:16:05.628     18:36:51 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:16:05.628     18:36:51 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:16:05.628     18:36:51 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 ))
00:16:05.628     18:36:51 sma.sma_discovery -- sma/discovery.sh@44 -- # echo ,
00:16:05.628     18:36:51 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:16:05.628     18:36:51 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:16:05.628     18:36:51 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:16:05.628     18:36:51 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 ))
00:16:05.628     18:36:51 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:16:05.628     18:36:51 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:16:05.628  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:05.628  I0000 00:00:1731865012.078696  489700 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:05.628  I0000 00:00:1731865012.080419  489700 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:05.628  {}
00:16:05.628    18:36:52 sma.sma_discovery -- sma/discovery.sh@251 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:05.628    18:36:52 sma.sma_discovery -- sma/discovery.sh@251 -- # jq -r '. | length'
00:16:05.887   18:36:52 sma.sma_discovery -- sma/discovery.sh@251 -- # [[ 2 -eq 2 ]]
00:16:05.887   18:36:52 sma.sma_discovery -- sma/discovery.sh@252 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:05.887   18:36:52 sma.sma_discovery -- sma/discovery.sh@252 -- # jq -r '.[].trid.trsvcid'
00:16:05.887   18:36:52 sma.sma_discovery -- sma/discovery.sh@252 -- # grep 8009
00:16:06.145  8009
00:16:06.145   18:36:52 sma.sma_discovery -- sma/discovery.sh@253 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:06.145   18:36:52 sma.sma_discovery -- sma/discovery.sh@253 -- # grep 8010
00:16:06.145   18:36:52 sma.sma_discovery -- sma/discovery.sh@253 -- # jq -r '.[].trid.trsvcid'
00:16:06.404  8010
00:16:06.404   18:36:52 sma.sma_discovery -- sma/discovery.sh@254 -- # jq -r '.[].namespaces[].uuid'
00:16:06.404   18:36:52 sma.sma_discovery -- sma/discovery.sh@254 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:06.404   18:36:52 sma.sma_discovery -- sma/discovery.sh@254 -- # grep 1ab963a7-7702-420c-8e92-05dd280e2944
00:16:06.663  1ab963a7-7702-420c-8e92-05dd280e2944
00:16:06.663   18:36:52 sma.sma_discovery -- sma/discovery.sh@255 -- # jq -r '.[].namespaces[].uuid'
00:16:06.663   18:36:52 sma.sma_discovery -- sma/discovery.sh@255 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:06.663   18:36:52 sma.sma_discovery -- sma/discovery.sh@255 -- # grep 00a8bebe-9195-4984-8a1c-037f87ecc225
00:16:06.663  00a8bebe-9195-4984-8a1c-037f87ecc225
00:16:06.663   18:36:53 sma.sma_discovery -- sma/discovery.sh@258 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 1ab963a7-7702-420c-8e92-05dd280e2944
00:16:06.663   18:36:53 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:06.663    18:36:53 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 1ab963a7-7702-420c-8e92-05dd280e2944
00:16:06.663    18:36:53 sma.sma_discovery -- sma/common.sh@20 -- # python
00:16:07.231  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:07.231  I0000 00:00:1731865013.533148  489956 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:07.231  I0000 00:00:1731865013.534761  489956 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:07.231  {}
00:16:07.231    18:36:53 sma.sma_discovery -- sma/discovery.sh@260 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:07.231    18:36:53 sma.sma_discovery -- sma/discovery.sh@260 -- # jq -r '. | length'
00:16:07.231   18:36:53 sma.sma_discovery -- sma/discovery.sh@260 -- # [[ 2 -eq 2 ]]
00:16:07.231   18:36:53 sma.sma_discovery -- sma/discovery.sh@261 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:07.231   18:36:53 sma.sma_discovery -- sma/discovery.sh@261 -- # jq -r '.[].trid.trsvcid'
00:16:07.231   18:36:53 sma.sma_discovery -- sma/discovery.sh@261 -- # grep 8009
00:16:07.489  8009
00:16:07.489   18:36:53 sma.sma_discovery -- sma/discovery.sh@262 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:07.489   18:36:53 sma.sma_discovery -- sma/discovery.sh@262 -- # jq -r '.[].trid.trsvcid'
00:16:07.489   18:36:53 sma.sma_discovery -- sma/discovery.sh@262 -- # grep 8010
00:16:07.747  8010
00:16:07.747   18:36:54 sma.sma_discovery -- sma/discovery.sh@265 -- # NOT delete_device nvmf-tcp:nqn.2016-06.io.spdk:local0
00:16:07.747   18:36:54 sma.sma_discovery -- common/autotest_common.sh@652 -- # local es=0
00:16:07.747   18:36:54 sma.sma_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg delete_device nvmf-tcp:nqn.2016-06.io.spdk:local0
00:16:07.747   18:36:54 sma.sma_discovery -- common/autotest_common.sh@640 -- # local arg=delete_device
00:16:07.747   18:36:54 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:07.747    18:36:54 sma.sma_discovery -- common/autotest_common.sh@644 -- # type -t delete_device
00:16:07.747   18:36:54 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:07.747   18:36:54 sma.sma_discovery -- common/autotest_common.sh@655 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:local0
00:16:07.747   18:36:54 sma.sma_discovery -- sma/discovery.sh@95 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:08.005  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:08.005  I0000 00:00:1731865014.395017  490192 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:08.005  I0000 00:00:1731865014.397089  490192 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:08.005  Traceback (most recent call last):
00:16:08.005    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:16:08.005      main(sys.argv[1:])
00:16:08.005    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:16:08.005      result = client.call(request['method'], request.get('params', {}))
00:16:08.005               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:08.005    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:16:08.005      response = func(request=json_format.ParseDict(params, input()))
00:16:08.005                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:08.005    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:16:08.005      return _end_unary_response_blocking(state, call, False, None)
00:16:08.005             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:08.005    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:16:08.005      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:16:08.005      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:08.005  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:16:08.005  	status = StatusCode.FAILED_PRECONDITION
00:16:08.005  	details = "Device has attached volumes"
00:16:08.005  	debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B::1%5D:8080 {created_time:"2024-11-17T18:36:54.399166328+01:00", grpc_status:9, grpc_message:"Device has attached volumes"}"
00:16:08.005  >
00:16:08.005   18:36:54 sma.sma_discovery -- common/autotest_common.sh@655 -- # es=1
00:16:08.005   18:36:54 sma.sma_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:08.005   18:36:54 sma.sma_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:08.005   18:36:54 sma.sma_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:08.005    18:36:54 sma.sma_discovery -- sma/discovery.sh@267 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:08.005    18:36:54 sma.sma_discovery -- sma/discovery.sh@267 -- # jq -r '. | length'
00:16:08.264   18:36:54 sma.sma_discovery -- sma/discovery.sh@267 -- # [[ 2 -eq 2 ]]
00:16:08.264   18:36:54 sma.sma_discovery -- sma/discovery.sh@268 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:08.264   18:36:54 sma.sma_discovery -- sma/discovery.sh@268 -- # jq -r '.[].trid.trsvcid'
00:16:08.264   18:36:54 sma.sma_discovery -- sma/discovery.sh@268 -- # grep 8009
00:16:08.522  8009
00:16:08.522   18:36:54 sma.sma_discovery -- sma/discovery.sh@269 -- # jq -r '.[].trid.trsvcid'
00:16:08.522   18:36:54 sma.sma_discovery -- sma/discovery.sh@269 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:08.522   18:36:54 sma.sma_discovery -- sma/discovery.sh@269 -- # grep 8010
00:16:08.522  8010
00:16:08.522   18:36:55 sma.sma_discovery -- sma/discovery.sh@272 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 00a8bebe-9195-4984-8a1c-037f87ecc225
00:16:08.522   18:36:55 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:08.522    18:36:55 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 00a8bebe-9195-4984-8a1c-037f87ecc225
00:16:08.522    18:36:55 sma.sma_discovery -- sma/common.sh@20 -- # python
00:16:08.779  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:08.779  I0000 00:00:1731865015.310778  490421 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:08.779  I0000 00:00:1731865015.312448  490421 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:09.036  {}
00:16:09.036   18:36:55 sma.sma_discovery -- sma/discovery.sh@273 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:local0
00:16:09.036   18:36:55 sma.sma_discovery -- sma/discovery.sh@95 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:09.036  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:09.036  I0000 00:00:1731865015.573706  490447 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:09.036  I0000 00:00:1731865015.575436  490447 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:09.036  {}
00:16:09.293    18:36:55 sma.sma_discovery -- sma/discovery.sh@275 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:09.293    18:36:55 sma.sma_discovery -- sma/discovery.sh@275 -- # jq -r '. | length'
00:16:09.293   18:36:55 sma.sma_discovery -- sma/discovery.sh@275 -- # [[ 0 -eq 0 ]]
00:16:09.293   18:36:55 sma.sma_discovery -- sma/discovery.sh@276 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:09.293   18:36:55 sma.sma_discovery -- common/autotest_common.sh@652 -- # local es=0
00:16:09.293   18:36:55 sma.sma_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:09.293   18:36:55 sma.sma_discovery -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:16:09.293   18:36:55 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:09.293    18:36:55 sma.sma_discovery -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:16:09.293   18:36:55 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:09.293    18:36:55 sma.sma_discovery -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:16:09.293   18:36:55 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:09.293   18:36:55 sma.sma_discovery -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:16:09.293   18:36:55 sma.sma_discovery -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py ]]
00:16:09.293   18:36:55 sma.sma_discovery -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:09.552  [2024-11-17 18:36:56.037814] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:local0' does not exist
00:16:09.552  request:
00:16:09.552  {
00:16:09.552    "nqn": "nqn.2016-06.io.spdk:local0",
00:16:09.552    "method": "nvmf_get_subsystems",
00:16:09.552    "req_id": 1
00:16:09.552  }
00:16:09.552  Got JSON-RPC error response
00:16:09.552  response:
00:16:09.552  {
00:16:09.552    "code": -19,
00:16:09.552    "message": "No such device"
00:16:09.552  }
00:16:09.552   18:36:56 sma.sma_discovery -- common/autotest_common.sh@655 -- # es=1
00:16:09.552   18:36:56 sma.sma_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:09.552   18:36:56 sma.sma_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:09.552   18:36:56 sma.sma_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:09.552    18:36:56 sma.sma_discovery -- sma/discovery.sh@279 -- # jq -r .handle
00:16:09.552    18:36:56 sma.sma_discovery -- sma/discovery.sh@279 -- # create_device nqn.2016-06.io.spdk:local0 1ab963a7-7702-420c-8e92-05dd280e2944 8009
00:16:09.553    18:36:56 sma.sma_discovery -- sma/discovery.sh@69 -- # local nqn=nqn.2016-06.io.spdk:local0
00:16:09.553    18:36:56 sma.sma_discovery -- sma/discovery.sh@70 -- # local volume_id=1ab963a7-7702-420c-8e92-05dd280e2944
00:16:09.553    18:36:56 sma.sma_discovery -- sma/discovery.sh@71 -- # local volume=
00:16:09.553    18:36:56 sma.sma_discovery -- sma/discovery.sh@73 -- # shift
00:16:09.553    18:36:56 sma.sma_discovery -- sma/discovery.sh@74 -- # [[ -n 1ab963a7-7702-420c-8e92-05dd280e2944 ]]
00:16:09.553     18:36:56 sma.sma_discovery -- sma/discovery.sh@75 -- # format_volume 1ab963a7-7702-420c-8e92-05dd280e2944 8009
00:16:09.553     18:36:56 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=1ab963a7-7702-420c-8e92-05dd280e2944
00:16:09.553     18:36:56 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:16:09.553     18:36:56 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:16:09.553      18:36:56 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 1ab963a7-7702-420c-8e92-05dd280e2944
00:16:09.553      18:36:56 sma.sma_discovery -- sma/common.sh@20 -- # python
00:16:09.553      18:36:56 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8009
00:16:09.553      18:36:56 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8009')
00:16:09.553      18:36:56 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:16:09.553      18:36:56 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:16:09.553      18:36:56 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:16:09.553      18:36:56 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:16:09.553      18:36:56 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 1 ))
00:16:09.553      18:36:56 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:16:09.553      18:36:56 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:16:09.553    18:36:56 sma.sma_discovery -- sma/discovery.sh@75 -- # volume='"volume": {
00:16:09.553  "volume_id": "Grljp3cCQgyOkgXdKA4pRA==",
00:16:09.553  "nvmf": {
00:16:09.553  "hostnqn": "nqn.2016-06.io.spdk:host0",
00:16:09.553  "discovery": {
00:16:09.553  "discovery_endpoints": [
00:16:09.553  {
00:16:09.553  "trtype": "tcp",
00:16:09.553  "traddr": "127.0.0.1",
00:16:09.553  "trsvcid": "8009"
00:16:09.553  }
00:16:09.553  ]
00:16:09.553  }
00:16:09.553  }
00:16:09.553  },'
00:16:09.553    18:36:56 sma.sma_discovery -- sma/discovery.sh@78 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:09.812  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:09.812  I0000 00:00:1731865016.304277  490685 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:09.812  I0000 00:00:1731865016.306115  490685 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:11.189  [2024-11-17 18:36:57.424246] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 ***
00:16:11.189   18:36:57 sma.sma_discovery -- sma/discovery.sh@279 -- # device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:16:11.189    18:36:57 sma.sma_discovery -- sma/discovery.sh@282 -- # jq -r '. | length'
00:16:11.190    18:36:57 sma.sma_discovery -- sma/discovery.sh@282 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:11.190   18:36:57 sma.sma_discovery -- sma/discovery.sh@282 -- # [[ 1 -eq 1 ]]
00:16:11.190   18:36:57 sma.sma_discovery -- sma/discovery.sh@283 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:11.190   18:36:57 sma.sma_discovery -- sma/discovery.sh@283 -- # grep 8009
00:16:11.190   18:36:57 sma.sma_discovery -- sma/discovery.sh@283 -- # jq -r '.[].trid.trsvcid'
00:16:11.449  8009
00:16:11.449    18:36:57 sma.sma_discovery -- sma/discovery.sh@284 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:11.449    18:36:57 sma.sma_discovery -- sma/discovery.sh@284 -- # jq -r '.[].namespaces | length'
00:16:11.708   18:36:58 sma.sma_discovery -- sma/discovery.sh@284 -- # [[ 1 -eq 1 ]]
00:16:11.708    18:36:58 sma.sma_discovery -- sma/discovery.sh@285 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:11.708    18:36:58 sma.sma_discovery -- sma/discovery.sh@285 -- # jq -r '.[].namespaces[0].uuid'
00:16:11.966   18:36:58 sma.sma_discovery -- sma/discovery.sh@285 -- # [[ 1ab963a7-7702-420c-8e92-05dd280e2944 == \1\a\b\9\6\3\a\7\-\7\7\0\2\-\4\2\0\c\-\8\e\9\2\-\0\5\d\d\2\8\0\e\2\9\4\4 ]]
00:16:11.967   18:36:58 sma.sma_discovery -- sma/discovery.sh@288 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 1ab963a7-7702-420c-8e92-05dd280e2944
00:16:11.967   18:36:58 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:11.967    18:36:58 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 1ab963a7-7702-420c-8e92-05dd280e2944
00:16:11.967    18:36:58 sma.sma_discovery -- sma/common.sh@20 -- # python
00:16:12.225  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:12.225  I0000 00:00:1731865018.602279  491132 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:12.225  I0000 00:00:1731865018.604225  491132 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:12.225  {}
00:16:12.225    18:36:58 sma.sma_discovery -- sma/discovery.sh@290 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:12.225    18:36:58 sma.sma_discovery -- sma/discovery.sh@290 -- # jq -r '. | length'
00:16:12.484   18:36:58 sma.sma_discovery -- sma/discovery.sh@290 -- # [[ 0 -eq 0 ]]
00:16:12.484    18:36:58 sma.sma_discovery -- sma/discovery.sh@291 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:12.484    18:36:58 sma.sma_discovery -- sma/discovery.sh@291 -- # jq -r '.[].namespaces | length'
00:16:12.743   18:36:59 sma.sma_discovery -- sma/discovery.sh@291 -- # [[ 0 -eq 0 ]]
00:16:12.743   18:36:59 sma.sma_discovery -- sma/discovery.sh@294 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 00a8bebe-9195-4984-8a1c-037f87ecc225 8010 8011
00:16:12.743   18:36:59 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:16:12.743   18:36:59 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:16:12.743   18:36:59 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:12.743    18:36:59 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 00a8bebe-9195-4984-8a1c-037f87ecc225 8010 8011
00:16:12.743    18:36:59 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=00a8bebe-9195-4984-8a1c-037f87ecc225
00:16:12.743    18:36:59 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:16:12.743    18:36:59 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:16:12.743     18:36:59 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 00a8bebe-9195-4984-8a1c-037f87ecc225
00:16:12.743     18:36:59 sma.sma_discovery -- sma/common.sh@20 -- # python
00:16:12.743     18:36:59 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8010 8011
00:16:12.743     18:36:59 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8010' '8011')
00:16:12.743     18:36:59 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:16:12.743     18:36:59 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:16:12.743     18:36:59 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:16:12.743     18:36:59 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:16:12.743     18:36:59 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 ))
00:16:12.743     18:36:59 sma.sma_discovery -- sma/discovery.sh@44 -- # echo ,
00:16:12.743     18:36:59 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:16:12.743     18:36:59 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:16:12.743     18:36:59 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:16:12.743     18:36:59 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 ))
00:16:12.743     18:36:59 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:16:12.743     18:36:59 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:16:13.937  {}
00:16:14.196    18:37:00 sma.sma_discovery -- sma/discovery.sh@297 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:14.196    18:37:00 sma.sma_discovery -- sma/discovery.sh@297 -- # jq -r '. | length'
00:16:14.454   18:37:00 sma.sma_discovery -- sma/discovery.sh@297 -- # [[ 1 -eq 1 ]]
00:16:14.454    18:37:00 sma.sma_discovery -- sma/discovery.sh@298 -- # jq -r '.[].namespaces | length'
00:16:14.454    18:37:00 sma.sma_discovery -- sma/discovery.sh@298 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:14.712   18:37:01 sma.sma_discovery -- sma/discovery.sh@298 -- # [[ 1 -eq 1 ]]
00:16:14.713    18:37:01 sma.sma_discovery -- sma/discovery.sh@299 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:14.713    18:37:01 sma.sma_discovery -- sma/discovery.sh@299 -- # jq -r '.[].namespaces[0].uuid'
00:16:14.713   18:37:01 sma.sma_discovery -- sma/discovery.sh@299 -- # [[ 00a8bebe-9195-4984-8a1c-037f87ecc225 == \0\0\a\8\b\e\b\e\-\9\1\9\5\-\4\9\8\4\-\8\a\1\c\-\0\3\7\f\8\7\e\c\c\2\2\5 ]]
00:16:14.713   18:37:01 sma.sma_discovery -- sma/discovery.sh@302 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 09f7cd9f-e91d-4c0d-990a-282c495eb034 8011
00:16:14.713   18:37:01 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:16:14.713   18:37:01 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:16:14.713   18:37:01 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:14.713    18:37:01 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 09f7cd9f-e91d-4c0d-990a-282c495eb034 8011
00:16:14.713    18:37:01 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=09f7cd9f-e91d-4c0d-990a-282c495eb034
00:16:14.713    18:37:01 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:16:14.713    18:37:01 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:16:14.713     18:37:01 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 09f7cd9f-e91d-4c0d-990a-282c495eb034
00:16:14.713     18:37:01 sma.sma_discovery -- sma/common.sh@20 -- # python
00:16:14.972     18:37:01 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8011
00:16:14.972     18:37:01 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8011')
00:16:14.972     18:37:01 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:16:14.972     18:37:01 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:16:14.972     18:37:01 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:16:14.972     18:37:01 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:16:14.972     18:37:01 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 1 ))
00:16:14.972     18:37:01 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:16:14.972     18:37:01 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:16:15.231  {}
00:16:15.231    18:37:01 sma.sma_discovery -- sma/discovery.sh@305 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:15.231    18:37:01 sma.sma_discovery -- sma/discovery.sh@305 -- # jq -r '. | length'
00:16:15.489   18:37:01 sma.sma_discovery -- sma/discovery.sh@305 -- # [[ 1 -eq 1 ]]
00:16:15.489    18:37:01 sma.sma_discovery -- sma/discovery.sh@306 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:15.489    18:37:01 sma.sma_discovery -- sma/discovery.sh@306 -- # jq -r '.[].namespaces | length'
00:16:15.489   18:37:02 sma.sma_discovery -- sma/discovery.sh@306 -- # [[ 2 -eq 2 ]]
00:16:15.489   18:37:02 sma.sma_discovery -- sma/discovery.sh@307 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:15.489   18:37:02 sma.sma_discovery -- sma/discovery.sh@307 -- # jq -r '.[].namespaces[].uuid'
00:16:15.489   18:37:02 sma.sma_discovery -- sma/discovery.sh@307 -- # grep 00a8bebe-9195-4984-8a1c-037f87ecc225
00:16:15.748  00a8bebe-9195-4984-8a1c-037f87ecc225
00:16:15.748   18:37:02 sma.sma_discovery -- sma/discovery.sh@308 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:15.748   18:37:02 sma.sma_discovery -- sma/discovery.sh@308 -- # jq -r '.[].namespaces[].uuid'
00:16:15.748   18:37:02 sma.sma_discovery -- sma/discovery.sh@308 -- # grep 09f7cd9f-e91d-4c0d-990a-282c495eb034
00:16:16.007  09f7cd9f-e91d-4c0d-990a-282c495eb034
00:16:16.007   18:37:02 sma.sma_discovery -- sma/discovery.sh@311 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 1ab963a7-7702-420c-8e92-05dd280e2944
00:16:16.007   18:37:02 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:16.007    18:37:02 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 1ab963a7-7702-420c-8e92-05dd280e2944
00:16:16.007    18:37:02 sma.sma_discovery -- sma/common.sh@20 -- # python
00:16:16.266  [2024-11-17 18:37:02.709752] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 1ab963a7-7702-420c-8e92-05dd280e2944
00:16:16.266  {}
00:16:16.266   18:37:02 sma.sma_discovery -- sma/discovery.sh@312 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 00a8bebe-9195-4984-8a1c-037f87ecc225
00:16:16.266   18:37:02 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:16.266    18:37:02 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 00a8bebe-9195-4984-8a1c-037f87ecc225
00:16:16.266    18:37:02 sma.sma_discovery -- sma/common.sh@20 -- # python
00:16:16.525  {}
00:16:16.525   18:37:03 sma.sma_discovery -- sma/discovery.sh@313 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 09f7cd9f-e91d-4c0d-990a-282c495eb034
00:16:16.525   18:37:03 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:16.525    18:37:03 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 09f7cd9f-e91d-4c0d-990a-282c495eb034
00:16:16.525    18:37:03 sma.sma_discovery -- sma/common.sh@20 -- # python
00:16:17.093  {}
00:16:17.093   18:37:03 sma.sma_discovery -- sma/discovery.sh@314 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:local0
00:16:17.093   18:37:03 sma.sma_discovery -- sma/discovery.sh@95 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:17.093  {}
00:16:17.093    18:37:03 sma.sma_discovery -- sma/discovery.sh@315 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:17.093    18:37:03 sma.sma_discovery -- sma/discovery.sh@315 -- # jq -r '. | length'
00:16:17.352   18:37:03 sma.sma_discovery -- sma/discovery.sh@315 -- # [[ 0 -eq 0 ]]
00:16:17.352    18:37:03 sma.sma_discovery -- sma/discovery.sh@317 -- # create_device nqn.2016-06.io.spdk:local0
00:16:17.352    18:37:03 sma.sma_discovery -- sma/discovery.sh@69 -- # local nqn=nqn.2016-06.io.spdk:local0
00:16:17.352    18:37:03 sma.sma_discovery -- sma/discovery.sh@70 -- # local volume_id=
00:16:17.352    18:37:03 sma.sma_discovery -- sma/discovery.sh@317 -- # jq -r .handle
00:16:17.352    18:37:03 sma.sma_discovery -- sma/discovery.sh@71 -- # local volume=
00:16:17.352    18:37:03 sma.sma_discovery -- sma/discovery.sh@73 -- # shift
00:16:17.352    18:37:03 sma.sma_discovery -- sma/discovery.sh@74 -- # [[ -n '' ]]
00:16:17.352    18:37:03 sma.sma_discovery -- sma/discovery.sh@78 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:17.612  [2024-11-17 18:37:04.130228] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 ***
00:16:17.612   18:37:04 sma.sma_discovery -- sma/discovery.sh@317 -- # device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:16:17.612   18:37:04 sma.sma_discovery -- sma/discovery.sh@320 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:17.612    18:37:04 sma.sma_discovery -- sma/discovery.sh@320 -- # uuid2base64 1ab963a7-7702-420c-8e92-05dd280e2944
00:16:17.612    18:37:04 sma.sma_discovery -- sma/common.sh@20 -- # python
00:16:19.248  {}
00:16:19.248    18:37:05 sma.sma_discovery -- sma/discovery.sh@345 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:19.248    18:37:05 sma.sma_discovery -- sma/discovery.sh@345 -- # jq -r '. | length'
00:16:19.248   18:37:05 sma.sma_discovery -- sma/discovery.sh@345 -- # [[ 1 -eq 1 ]]
00:16:19.248   18:37:05 sma.sma_discovery -- sma/discovery.sh@346 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:19.248   18:37:05 sma.sma_discovery -- sma/discovery.sh@346 -- # grep 8009
00:16:19.248   18:37:05 sma.sma_discovery -- sma/discovery.sh@346 -- # jq -r '.[].trid.trsvcid'
00:16:19.506  8009
00:16:19.506    18:37:05 sma.sma_discovery -- sma/discovery.sh@347 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:19.506    18:37:05 sma.sma_discovery -- sma/discovery.sh@347 -- # jq -r '.[].namespaces | length'
00:16:19.765   18:37:06 sma.sma_discovery -- sma/discovery.sh@347 -- # [[ 1 -eq 1 ]]
00:16:19.765    18:37:06 sma.sma_discovery -- sma/discovery.sh@348 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:19.765    18:37:06 sma.sma_discovery -- sma/discovery.sh@348 -- # jq -r '.[].namespaces[0].uuid'
00:16:20.024   18:37:06 sma.sma_discovery -- sma/discovery.sh@348 -- # [[ 1ab963a7-7702-420c-8e92-05dd280e2944 == \1\a\b\9\6\3\a\7\-\7\7\0\2\-\4\2\0\c\-\8\e\9\2\-\0\5\d\d\2\8\0\e\2\9\4\4 ]]
00:16:20.024   18:37:06 sma.sma_discovery -- sma/discovery.sh@351 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:20.024    18:37:06 sma.sma_discovery -- sma/discovery.sh@351 -- # uuid2base64 00a8bebe-9195-4984-8a1c-037f87ecc225
00:16:20.024    18:37:06 sma.sma_discovery -- sma/common.sh@20 -- # python
00:16:20.024   18:37:06 sma.sma_discovery -- common/autotest_common.sh@652 -- # local es=0
00:16:20.024   18:37:06 sma.sma_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:20.024   18:37:06 sma.sma_discovery -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:20.024   18:37:06 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:20.024    18:37:06 sma.sma_discovery -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:20.024   18:37:06 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:20.024    18:37:06 sma.sma_discovery -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:20.024   18:37:06 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:20.024   18:37:06 sma.sma_discovery -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:20.024   18:37:06 sma.sma_discovery -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:16:20.024   18:37:06 sma.sma_discovery -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:21.220  Traceback (most recent call last):
00:16:21.220    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:16:21.220      main(sys.argv[1:])
00:16:21.220    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:16:21.220      result = client.call(request['method'], request.get('params', {}))
00:16:21.220               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:21.220    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:16:21.220      response = func(request=json_format.ParseDict(params, input()))
00:16:21.220                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:21.220    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:16:21.220      return _end_unary_response_blocking(state, call, False, None)
00:16:21.220             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:21.220    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:16:21.220      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:16:21.220      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:21.220  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:16:21.220  	status = StatusCode.INVALID_ARGUMENT
00:16:21.220  	details = "Unexpected subsystem NQN"
00:16:21.220  	debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B::1%5D:8080 {grpc_message:"Unexpected subsystem NQN", grpc_status:3, created_time:"2024-11-17T18:37:07.787526715+01:00"}"
00:16:21.220  >
00:16:21.479   18:37:07 sma.sma_discovery -- common/autotest_common.sh@655 -- # es=1
00:16:21.479   18:37:07 sma.sma_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:21.479   18:37:07 sma.sma_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:21.479   18:37:07 sma.sma_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:21.479    18:37:07 sma.sma_discovery -- sma/discovery.sh@377 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:21.479    18:37:07 sma.sma_discovery -- sma/discovery.sh@377 -- # jq -r '. | length'
00:16:21.737   18:37:08 sma.sma_discovery -- sma/discovery.sh@377 -- # [[ 1 -eq 1 ]]
00:16:21.738   18:37:08 sma.sma_discovery -- sma/discovery.sh@378 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:21.738   18:37:08 sma.sma_discovery -- sma/discovery.sh@378 -- # jq -r '.[].trid.trsvcid'
00:16:21.738   18:37:08 sma.sma_discovery -- sma/discovery.sh@378 -- # grep 8009
00:16:21.738  8009
00:16:21.738    18:37:08 sma.sma_discovery -- sma/discovery.sh@379 -- # jq -r '.[].namespaces | length'
00:16:21.738    18:37:08 sma.sma_discovery -- sma/discovery.sh@379 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:21.997   18:37:08 sma.sma_discovery -- sma/discovery.sh@379 -- # [[ 1 -eq 1 ]]
00:16:21.997    18:37:08 sma.sma_discovery -- sma/discovery.sh@380 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:21.997    18:37:08 sma.sma_discovery -- sma/discovery.sh@380 -- # jq -r '.[].namespaces[0].uuid'
00:16:22.256   18:37:08 sma.sma_discovery -- sma/discovery.sh@380 -- # [[ 1ab963a7-7702-420c-8e92-05dd280e2944 == \1\a\b\9\6\3\a\7\-\7\7\0\2\-\4\2\0\c\-\8\e\9\2\-\0\5\d\d\2\8\0\e\2\9\4\4 ]]
00:16:22.256   18:37:08 sma.sma_discovery -- sma/discovery.sh@383 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:22.256    18:37:08 sma.sma_discovery -- sma/discovery.sh@383 -- # uuid2base64 00a8bebe-9195-4984-8a1c-037f87ecc225
00:16:22.256    18:37:08 sma.sma_discovery -- sma/common.sh@20 -- # python
00:16:22.256   18:37:08 sma.sma_discovery -- common/autotest_common.sh@652 -- # local es=0
00:16:22.256   18:37:08 sma.sma_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:22.256   18:37:08 sma.sma_discovery -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:22.256   18:37:08 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:22.256    18:37:08 sma.sma_discovery -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:22.256   18:37:08 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:22.256    18:37:08 sma.sma_discovery -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:22.256   18:37:08 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:22.256   18:37:08 sma.sma_discovery -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:22.256   18:37:08 sma.sma_discovery -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:16:22.256   18:37:08 sma.sma_discovery -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:27.785  [2024-11-17 18:37:13.966368] bdev_nvme.c:7477:discovery_poller: *ERROR*: Discovery[127.0.0.1:8010] timed out while attaching NVM ctrlrs
00:16:27.785  Traceback (most recent call last):
00:16:27.785    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:16:27.785      main(sys.argv[1:])
00:16:27.785    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:16:27.785      result = client.call(request['method'], request.get('params', {}))
00:16:27.785               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:27.785    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:16:27.785      response = func(request=json_format.ParseDict(params, input()))
00:16:27.785                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:27.785    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:16:27.785      return _end_unary_response_blocking(state, call, False, None)
00:16:27.785             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:27.785    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:16:27.785      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:16:27.785      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:27.785  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:16:27.785  	status = StatusCode.INTERNAL
00:16:27.785  	details = "Failed to start discovery"
00:16:27.785  	debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B::1%5D:8080 {grpc_message:"Failed to start discovery", grpc_status:13, created_time:"2024-11-17T18:37:13.969157073+01:00"}"
00:16:27.785  >
00:16:27.785   18:37:14 sma.sma_discovery -- common/autotest_common.sh@655 -- # es=1
00:16:27.786   18:37:14 sma.sma_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:27.786   18:37:14 sma.sma_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:27.786   18:37:14 sma.sma_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:27.786    18:37:14 sma.sma_discovery -- sma/discovery.sh@408 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:27.786    18:37:14 sma.sma_discovery -- sma/discovery.sh@408 -- # jq -r '. | length'
00:16:27.786   18:37:14 sma.sma_discovery -- sma/discovery.sh@408 -- # [[ 1 -eq 1 ]]
00:16:27.786   18:37:14 sma.sma_discovery -- sma/discovery.sh@409 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:27.786   18:37:14 sma.sma_discovery -- sma/discovery.sh@409 -- # grep 8009
00:16:27.786   18:37:14 sma.sma_discovery -- sma/discovery.sh@409 -- # jq -r '.[].trid.trsvcid'
00:16:28.044  8009
00:16:28.044    18:37:14 sma.sma_discovery -- sma/discovery.sh@410 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:28.044    18:37:14 sma.sma_discovery -- sma/discovery.sh@410 -- # jq -r '.[].namespaces | length'
00:16:28.303   18:37:14 sma.sma_discovery -- sma/discovery.sh@410 -- # [[ 1 -eq 1 ]]
00:16:28.303    18:37:14 sma.sma_discovery -- sma/discovery.sh@411 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:28.303    18:37:14 sma.sma_discovery -- sma/discovery.sh@411 -- # jq -r '.[].namespaces[0].uuid'
00:16:28.561   18:37:14 sma.sma_discovery -- sma/discovery.sh@411 -- # [[ 1ab963a7-7702-420c-8e92-05dd280e2944 == \1\a\b\9\6\3\a\7\-\7\7\0\2\-\4\2\0\c\-\8\e\9\2\-\0\5\d\d\2\8\0\e\2\9\4\4 ]]
00:16:28.561    18:37:14 sma.sma_discovery -- sma/discovery.sh@414 -- # uuidgen
00:16:28.561   18:37:14 sma.sma_discovery -- sma/discovery.sh@414 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 0eed0679-ed10-4e07-ad73-7410834dec1d 8008
00:16:28.561   18:37:14 sma.sma_discovery -- common/autotest_common.sh@652 -- # local es=0
00:16:28.561   18:37:14 sma.sma_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 0eed0679-ed10-4e07-ad73-7410834dec1d 8008
00:16:28.561   18:37:14 sma.sma_discovery -- common/autotest_common.sh@640 -- # local arg=attach_volume
00:16:28.561   18:37:14 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:28.561    18:37:14 sma.sma_discovery -- common/autotest_common.sh@644 -- # type -t attach_volume
00:16:28.561   18:37:14 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:28.561   18:37:14 sma.sma_discovery -- common/autotest_common.sh@655 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 0eed0679-ed10-4e07-ad73-7410834dec1d 8008
00:16:28.561   18:37:14 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:16:28.561   18:37:14 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:16:28.561   18:37:14 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:28.561    18:37:14 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 0eed0679-ed10-4e07-ad73-7410834dec1d 8008
00:16:28.561    18:37:14 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=0eed0679-ed10-4e07-ad73-7410834dec1d
00:16:28.561    18:37:14 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:16:28.561    18:37:14 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:16:28.561     18:37:14 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 0eed0679-ed10-4e07-ad73-7410834dec1d
00:16:28.561     18:37:14 sma.sma_discovery -- sma/common.sh@20 -- # python
00:16:28.562     18:37:14 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8008
00:16:28.562     18:37:14 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8008')
00:16:28.562     18:37:14 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:16:28.562     18:37:14 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:16:28.562     18:37:14 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:16:28.562     18:37:14 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:16:28.562     18:37:14 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 1 ))
00:16:28.562     18:37:14 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:16:28.562     18:37:14 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:16:29.772  [2024-11-17 18:37:16.158695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:29.772  [2024-11-17 18:37:16.158761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500024e300 with addr=127.0.0.1, port=8008
00:16:29.772  [2024-11-17 18:37:16.158792] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:16:29.772  [2024-11-17 18:37:16.158808] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:16:29.772  [2024-11-17 18:37:16.158821] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[127.0.0.1:8008] could not start discovery connect
00:16:30.709  [2024-11-17 18:37:17.160984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:30.709  [2024-11-17 18:37:17.161034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500024e580 with addr=127.0.0.1, port=8008
00:16:30.709  [2024-11-17 18:37:17.161056] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:16:30.709  [2024-11-17 18:37:17.161068] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:16:30.709  [2024-11-17 18:37:17.161080] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[127.0.0.1:8008] could not start discovery connect
00:16:31.645  [2024-11-17 18:37:18.163306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:31.645  [2024-11-17 18:37:18.163338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500024e800 with addr=127.0.0.1, port=8008
00:16:31.645  [2024-11-17 18:37:18.163373] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:16:31.645  [2024-11-17 18:37:18.163385] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:16:31.645  [2024-11-17 18:37:18.163396] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[127.0.0.1:8008] could not start discovery connect
00:16:33.022  [2024-11-17 18:37:19.165600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:33.022  [2024-11-17 18:37:19.165633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500024ea80 with addr=127.0.0.1, port=8008
00:16:33.022  [2024-11-17 18:37:19.165668] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:16:33.022  [2024-11-17 18:37:19.165680] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:16:33.022  [2024-11-17 18:37:19.165691] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[127.0.0.1:8008] could not start discovery connect
00:16:33.958  [2024-11-17 18:37:20.167823] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[127.0.0.1:8008] timed out while attaching discovery ctrlr
00:16:33.958  Traceback (most recent call last):
00:16:33.958    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:16:33.958      main(sys.argv[1:])
00:16:33.958    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:16:33.958      result = client.call(request['method'], request.get('params', {}))
00:16:33.958               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:33.958    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:16:33.958      response = func(request=json_format.ParseDict(params, input()))
00:16:33.958                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:33.958    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:16:33.958      return _end_unary_response_blocking(state, call, False, None)
00:16:33.958             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:33.958    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:16:33.958      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:16:33.958      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:33.958  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:16:33.958  	status = StatusCode.INTERNAL
00:16:33.958  	details = "Failed to start discovery"
00:16:33.958  	debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B::1%5D:8080 {created_time:"2024-11-17T18:37:20.172414665+01:00", grpc_status:13, grpc_message:"Failed to start discovery"}"
00:16:33.958  >
00:16:33.958   18:37:20 sma.sma_discovery -- common/autotest_common.sh@655 -- # es=1
00:16:33.958   18:37:20 sma.sma_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:33.958   18:37:20 sma.sma_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:33.958   18:37:20 sma.sma_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:33.958    18:37:20 sma.sma_discovery -- sma/discovery.sh@415 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:33.958    18:37:20 sma.sma_discovery -- sma/discovery.sh@415 -- # jq -r '. | length'
00:16:33.958   18:37:20 sma.sma_discovery -- sma/discovery.sh@415 -- # [[ 1 -eq 1 ]]
00:16:33.958   18:37:20 sma.sma_discovery -- sma/discovery.sh@416 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:33.958   18:37:20 sma.sma_discovery -- sma/discovery.sh@416 -- # jq -r '.[].trid.trsvcid'
00:16:33.958   18:37:20 sma.sma_discovery -- sma/discovery.sh@416 -- # grep 8009
00:16:34.217  8009
00:16:34.217   18:37:20 sma.sma_discovery -- sma/discovery.sh@420 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock1 nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:node1 1
00:16:34.476   18:37:20 sma.sma_discovery -- sma/discovery.sh@422 -- # sleep 2
00:16:34.735  WARNING:spdk.sma.volume.volume:Found disconnected volume: 1ab963a7-7702-420c-8e92-05dd280e2944
00:16:36.640    18:37:22 sma.sma_discovery -- sma/discovery.sh@423 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:36.640    18:37:22 sma.sma_discovery -- sma/discovery.sh@423 -- # jq -r '. | length'
00:16:36.900   18:37:23 sma.sma_discovery -- sma/discovery.sh@423 -- # [[ 0 -eq 0 ]]
00:16:36.900   18:37:23 sma.sma_discovery -- sma/discovery.sh@424 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock1 nvmf_subsystem_add_ns nqn.2016-06.io.spdk:node1 1ab963a7-7702-420c-8e92-05dd280e2944
00:16:36.900   18:37:23 sma.sma_discovery -- sma/discovery.sh@428 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 00a8bebe-9195-4984-8a1c-037f87ecc225 8010
00:16:36.900   18:37:23 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:16:36.900   18:37:23 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:16:36.900   18:37:23 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:36.900    18:37:23 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 00a8bebe-9195-4984-8a1c-037f87ecc225 8010
00:16:36.900    18:37:23 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=00a8bebe-9195-4984-8a1c-037f87ecc225
00:16:36.900    18:37:23 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:16:36.900    18:37:23 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:16:36.900     18:37:23 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 00a8bebe-9195-4984-8a1c-037f87ecc225
00:16:36.900     18:37:23 sma.sma_discovery -- sma/common.sh@20 -- # python
00:16:36.900     18:37:23 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8010
00:16:36.900     18:37:23 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8010')
00:16:36.900     18:37:23 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:16:36.900     18:37:23 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:16:36.900     18:37:23 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:16:36.900     18:37:23 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:16:36.900     18:37:23 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 1 ))
00:16:36.900     18:37:23 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:16:36.900     18:37:23 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:16:37.159  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:37.159  I0000 00:00:1731865043.705976  495857 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:37.159  I0000 00:00:1731865043.707613  495857 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:38.536  {}
00:16:38.536   18:37:24 sma.sma_discovery -- sma/discovery.sh@429 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 09f7cd9f-e91d-4c0d-990a-282c495eb034 8010
00:16:38.536   18:37:24 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:16:38.536   18:37:24 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:16:38.536   18:37:24 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:38.536    18:37:24 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 09f7cd9f-e91d-4c0d-990a-282c495eb034 8010
00:16:38.536    18:37:24 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=09f7cd9f-e91d-4c0d-990a-282c495eb034
00:16:38.536    18:37:24 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:16:38.536    18:37:24 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:16:38.536     18:37:24 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 09f7cd9f-e91d-4c0d-990a-282c495eb034
00:16:38.536     18:37:24 sma.sma_discovery -- sma/common.sh@20 -- # python
00:16:38.536     18:37:24 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8010
00:16:38.536     18:37:24 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8010')
00:16:38.536     18:37:24 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:16:38.536     18:37:24 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:16:38.536     18:37:24 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:16:38.536     18:37:24 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:16:38.536     18:37:24 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 1 ))
00:16:38.536     18:37:24 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:16:38.536     18:37:24 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:16:38.795  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:38.795  I0000 00:00:1731865045.219936  496093 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:38.795  I0000 00:00:1731865045.221817  496093 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:38.795  {}
00:16:38.795    18:37:25 sma.sma_discovery -- sma/discovery.sh@430 -- # jq -r '.[].namespaces | length'
00:16:38.795    18:37:25 sma.sma_discovery -- sma/discovery.sh@430 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:39.054   18:37:25 sma.sma_discovery -- sma/discovery.sh@430 -- # [[ 2 -eq 2 ]]
00:16:39.054    18:37:25 sma.sma_discovery -- sma/discovery.sh@431 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:39.054    18:37:25 sma.sma_discovery -- sma/discovery.sh@431 -- # jq -r '. | length'
00:16:39.313   18:37:25 sma.sma_discovery -- sma/discovery.sh@431 -- # [[ 1 -eq 1 ]]
00:16:39.313   18:37:25 sma.sma_discovery -- sma/discovery.sh@432 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock2 nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:node2 2
00:16:39.572   18:37:25 sma.sma_discovery -- sma/discovery.sh@434 -- # sleep 2
00:16:40.509  WARNING:spdk.sma.volume.volume:Found disconnected volume: 09f7cd9f-e91d-4c0d-990a-282c495eb034
00:16:41.448    18:37:27 sma.sma_discovery -- sma/discovery.sh@436 -- # jq -r '.[].namespaces | length'
00:16:41.448    18:37:27 sma.sma_discovery -- sma/discovery.sh@436 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:41.706   18:37:28 sma.sma_discovery -- sma/discovery.sh@436 -- # [[ 1 -eq 1 ]]
00:16:41.707    18:37:28 sma.sma_discovery -- sma/discovery.sh@437 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:41.707    18:37:28 sma.sma_discovery -- sma/discovery.sh@437 -- # jq -r '. | length'
00:16:41.965   18:37:28 sma.sma_discovery -- sma/discovery.sh@437 -- # [[ 1 -eq 1 ]]
00:16:41.965   18:37:28 sma.sma_discovery -- sma/discovery.sh@438 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock2 nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:node2 1
00:16:42.224   18:37:28 sma.sma_discovery -- sma/discovery.sh@440 -- # sleep 2
00:16:42.483  WARNING:spdk.sma.volume.volume:Found disconnected volume: 00a8bebe-9195-4984-8a1c-037f87ecc225
00:16:44.387    18:37:30 sma.sma_discovery -- sma/discovery.sh@442 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:44.387    18:37:30 sma.sma_discovery -- sma/discovery.sh@442 -- # jq -r '.[].namespaces | length'
00:16:44.387   18:37:30 sma.sma_discovery -- sma/discovery.sh@442 -- # [[ 0 -eq 0 ]]
00:16:44.387    18:37:30 sma.sma_discovery -- sma/discovery.sh@443 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:44.387    18:37:30 sma.sma_discovery -- sma/discovery.sh@443 -- # jq -r '. | length'
00:16:44.645   18:37:31 sma.sma_discovery -- sma/discovery.sh@443 -- # [[ 0 -eq 0 ]]
00:16:44.645   18:37:31 sma.sma_discovery -- sma/discovery.sh@444 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock2 nvmf_subsystem_add_ns nqn.2016-06.io.spdk:node2 00a8bebe-9195-4984-8a1c-037f87ecc225
00:16:44.904   18:37:31 sma.sma_discovery -- sma/discovery.sh@445 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock2 nvmf_subsystem_add_ns nqn.2016-06.io.spdk:node2 09f7cd9f-e91d-4c0d-990a-282c495eb034
00:16:44.904   18:37:31 sma.sma_discovery -- sma/discovery.sh@447 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:local0
00:16:44.904   18:37:31 sma.sma_discovery -- sma/discovery.sh@95 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:45.163  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:45.163  I0000 00:00:1731865051.653422  497381 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:45.163  I0000 00:00:1731865051.655132  497381 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:45.163  {}
00:16:45.163   18:37:31 sma.sma_discovery -- sma/discovery.sh@449 -- # cleanup
00:16:45.163   18:37:31 sma.sma_discovery -- sma/discovery.sh@27 -- # killprocess 486926
00:16:45.163   18:37:31 sma.sma_discovery -- common/autotest_common.sh@954 -- # '[' -z 486926 ']'
00:16:45.163   18:37:31 sma.sma_discovery -- common/autotest_common.sh@958 -- # kill -0 486926
00:16:45.163    18:37:31 sma.sma_discovery -- common/autotest_common.sh@959 -- # uname
00:16:45.163   18:37:31 sma.sma_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:45.163    18:37:31 sma.sma_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 486926
00:16:45.163   18:37:31 sma.sma_discovery -- common/autotest_common.sh@960 -- # process_name=python3
00:16:45.163   18:37:31 sma.sma_discovery -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:16:45.163   18:37:31 sma.sma_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 486926'
00:16:45.163  killing process with pid 486926
00:16:45.163   18:37:31 sma.sma_discovery -- common/autotest_common.sh@973 -- # kill 486926
00:16:45.163   18:37:31 sma.sma_discovery -- common/autotest_common.sh@978 -- # wait 486926
00:16:45.423   18:37:31 sma.sma_discovery -- sma/discovery.sh@28 -- # killprocess 486925
00:16:45.423   18:37:31 sma.sma_discovery -- common/autotest_common.sh@954 -- # '[' -z 486925 ']'
00:16:45.423   18:37:31 sma.sma_discovery -- common/autotest_common.sh@958 -- # kill -0 486925
00:16:45.423    18:37:31 sma.sma_discovery -- common/autotest_common.sh@959 -- # uname
00:16:45.423   18:37:31 sma.sma_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:45.423    18:37:31 sma.sma_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 486925
00:16:45.423   18:37:31 sma.sma_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:16:45.423   18:37:31 sma.sma_discovery -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:16:45.423   18:37:31 sma.sma_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 486925'
00:16:45.423  killing process with pid 486925
00:16:45.423   18:37:31 sma.sma_discovery -- common/autotest_common.sh@973 -- # kill 486925
00:16:45.423   18:37:31 sma.sma_discovery -- common/autotest_common.sh@978 -- # wait 486925
00:16:45.682   18:37:32 sma.sma_discovery -- sma/discovery.sh@29 -- # killprocess 486923
00:16:45.682   18:37:32 sma.sma_discovery -- common/autotest_common.sh@954 -- # '[' -z 486923 ']'
00:16:45.682   18:37:32 sma.sma_discovery -- common/autotest_common.sh@958 -- # kill -0 486923
00:16:45.682    18:37:32 sma.sma_discovery -- common/autotest_common.sh@959 -- # uname
00:16:45.682   18:37:32 sma.sma_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:45.682    18:37:32 sma.sma_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 486923
00:16:45.682   18:37:32 sma.sma_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:45.682   18:37:32 sma.sma_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:45.682   18:37:32 sma.sma_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 486923'
00:16:45.682  killing process with pid 486923
00:16:45.682   18:37:32 sma.sma_discovery -- common/autotest_common.sh@973 -- # kill 486923
00:16:45.682   18:37:32 sma.sma_discovery -- common/autotest_common.sh@978 -- # wait 486923
00:16:46.251   18:37:32 sma.sma_discovery -- sma/discovery.sh@30 -- # killprocess 486924
00:16:46.251   18:37:32 sma.sma_discovery -- common/autotest_common.sh@954 -- # '[' -z 486924 ']'
00:16:46.251   18:37:32 sma.sma_discovery -- common/autotest_common.sh@958 -- # kill -0 486924
00:16:46.251    18:37:32 sma.sma_discovery -- common/autotest_common.sh@959 -- # uname
00:16:46.251   18:37:32 sma.sma_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:46.251    18:37:32 sma.sma_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 486924
00:16:46.251   18:37:32 sma.sma_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:16:46.251   18:37:32 sma.sma_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:16:46.251   18:37:32 sma.sma_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 486924'
00:16:46.251  killing process with pid 486924
00:16:46.251   18:37:32 sma.sma_discovery -- common/autotest_common.sh@973 -- # kill 486924
00:16:46.251   18:37:32 sma.sma_discovery -- common/autotest_common.sh@978 -- # wait 486924
00:16:46.818   18:37:33 sma.sma_discovery -- sma/discovery.sh@450 -- # trap - SIGINT SIGTERM EXIT
00:16:46.818  
00:16:46.818  real	0m56.241s
00:16:46.818  user	3m5.810s
00:16:46.818  sys	0m7.633s
00:16:46.818   18:37:33 sma.sma_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:46.818   18:37:33 sma.sma_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:46.818  ************************************
00:16:46.818  END TEST sma_discovery
00:16:46.818  ************************************
00:16:46.818   18:37:33 sma -- sma/sma.sh@15 -- # run_test sma_vhost /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/vhost_blk.sh
00:16:46.818   18:37:33 sma -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:46.818   18:37:33 sma -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:46.818   18:37:33 sma -- common/autotest_common.sh@10 -- # set +x
00:16:46.818  ************************************
00:16:46.818  START TEST sma_vhost
00:16:46.818  ************************************
00:16:46.818   18:37:33 sma.sma_vhost -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/vhost_blk.sh
00:16:46.818  * Looking for test storage...
00:16:46.818  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:16:46.818    18:37:33 sma.sma_vhost -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:16:46.818     18:37:33 sma.sma_vhost -- common/autotest_common.sh@1693 -- # lcov --version
00:16:46.818     18:37:33 sma.sma_vhost -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:16:46.818    18:37:33 sma.sma_vhost -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:16:46.818    18:37:33 sma.sma_vhost -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:46.818    18:37:33 sma.sma_vhost -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:46.818    18:37:33 sma.sma_vhost -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:46.818    18:37:33 sma.sma_vhost -- scripts/common.sh@336 -- # IFS=.-:
00:16:46.818    18:37:33 sma.sma_vhost -- scripts/common.sh@336 -- # read -ra ver1
00:16:46.818    18:37:33 sma.sma_vhost -- scripts/common.sh@337 -- # IFS=.-:
00:16:46.818    18:37:33 sma.sma_vhost -- scripts/common.sh@337 -- # read -ra ver2
00:16:46.818    18:37:33 sma.sma_vhost -- scripts/common.sh@338 -- # local 'op=<'
00:16:46.818    18:37:33 sma.sma_vhost -- scripts/common.sh@340 -- # ver1_l=2
00:16:46.818    18:37:33 sma.sma_vhost -- scripts/common.sh@341 -- # ver2_l=1
00:16:46.818    18:37:33 sma.sma_vhost -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:46.818    18:37:33 sma.sma_vhost -- scripts/common.sh@344 -- # case "$op" in
00:16:46.818    18:37:33 sma.sma_vhost -- scripts/common.sh@345 -- # : 1
00:16:46.818    18:37:33 sma.sma_vhost -- scripts/common.sh@364 -- # (( v = 0 ))
00:16:46.818    18:37:33 sma.sma_vhost -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:46.818     18:37:33 sma.sma_vhost -- scripts/common.sh@365 -- # decimal 1
00:16:46.818     18:37:33 sma.sma_vhost -- scripts/common.sh@353 -- # local d=1
00:16:46.818     18:37:33 sma.sma_vhost -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:46.818     18:37:33 sma.sma_vhost -- scripts/common.sh@355 -- # echo 1
00:16:46.818    18:37:33 sma.sma_vhost -- scripts/common.sh@365 -- # ver1[v]=1
00:16:46.818     18:37:33 sma.sma_vhost -- scripts/common.sh@366 -- # decimal 2
00:16:46.818     18:37:33 sma.sma_vhost -- scripts/common.sh@353 -- # local d=2
00:16:46.818     18:37:33 sma.sma_vhost -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:46.818     18:37:33 sma.sma_vhost -- scripts/common.sh@355 -- # echo 2
00:16:46.818    18:37:33 sma.sma_vhost -- scripts/common.sh@366 -- # ver2[v]=2
00:16:46.818    18:37:33 sma.sma_vhost -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:16:46.818    18:37:33 sma.sma_vhost -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:16:46.818    18:37:33 sma.sma_vhost -- scripts/common.sh@368 -- # return 0
00:16:46.818    18:37:33 sma.sma_vhost -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:46.818    18:37:33 sma.sma_vhost -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:16:46.818  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:46.818  		--rc genhtml_branch_coverage=1
00:16:46.818  		--rc genhtml_function_coverage=1
00:16:46.819  		--rc genhtml_legend=1
00:16:46.819  		--rc geninfo_all_blocks=1
00:16:46.819  		--rc geninfo_unexecuted_blocks=1
00:16:46.819  		
00:16:46.819  		'
00:16:46.819    18:37:33 sma.sma_vhost -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:16:46.819  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:46.819  		--rc genhtml_branch_coverage=1
00:16:46.819  		--rc genhtml_function_coverage=1
00:16:46.819  		--rc genhtml_legend=1
00:16:46.819  		--rc geninfo_all_blocks=1
00:16:46.819  		--rc geninfo_unexecuted_blocks=1
00:16:46.819  		
00:16:46.819  		'
00:16:46.819    18:37:33 sma.sma_vhost -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:16:46.819  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:46.819  		--rc genhtml_branch_coverage=1
00:16:46.819  		--rc genhtml_function_coverage=1
00:16:46.819  		--rc genhtml_legend=1
00:16:46.819  		--rc geninfo_all_blocks=1
00:16:46.819  		--rc geninfo_unexecuted_blocks=1
00:16:46.819  		
00:16:46.819  		'
00:16:46.819    18:37:33 sma.sma_vhost -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:16:46.819  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:46.819  		--rc genhtml_branch_coverage=1
00:16:46.819  		--rc genhtml_function_coverage=1
00:16:46.819  		--rc genhtml_legend=1
00:16:46.819  		--rc geninfo_all_blocks=1
00:16:46.819  		--rc geninfo_unexecuted_blocks=1
00:16:46.819  		
00:16:46.819  		'
00:16:46.819   18:37:33 sma.sma_vhost -- sma/vhost_blk.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh
00:16:46.819    18:37:33 sma.sma_vhost -- vhost/common.sh@6 -- # : false
00:16:46.819    18:37:33 sma.sma_vhost -- vhost/common.sh@7 -- # : /root/vhost_test
00:16:46.819    18:37:33 sma.sma_vhost -- vhost/common.sh@8 -- # : /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:16:46.819    18:37:33 sma.sma_vhost -- vhost/common.sh@9 -- # : qemu-img
00:16:46.819     18:37:33 sma.sma_vhost -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/..
00:16:46.819    18:37:33 sma.sma_vhost -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest
00:16:46.819    18:37:33 sma.sma_vhost -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:16:46.819    18:37:33 sma.sma_vhost -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:16:46.819    18:37:33 sma.sma_vhost -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:16:46.819    18:37:33 sma.sma_vhost -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:16:46.819    18:37:33 sma.sma_vhost -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:16:46.819      18:37:33 sma.sma_vhost -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/vhost_blk.sh
00:16:46.819     18:37:33 sma.sma_vhost -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:16:46.819    18:37:33 sma.sma_vhost -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:16:46.819    18:37:33 sma.sma_vhost -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:16:46.819    18:37:33 sma.sma_vhost -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:16:46.819    18:37:33 sma.sma_vhost -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:16:46.819    18:37:33 sma.sma_vhost -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:16:46.819    18:37:33 sma.sma_vhost -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config
00:16:46.819     18:37:33 sma.sma_vhost -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:16:46.819     18:37:33 sma.sma_vhost -- common/autotest.config@2 -- # vhost_0_main_core=0
00:16:46.819     18:37:33 sma.sma_vhost -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:16:46.819     18:37:33 sma.sma_vhost -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:16:46.819     18:37:33 sma.sma_vhost -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:16:46.819     18:37:33 sma.sma_vhost -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:16:46.819     18:37:33 sma.sma_vhost -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:16:46.819     18:37:33 sma.sma_vhost -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:16:46.819     18:37:33 sma.sma_vhost -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:16:46.819     18:37:33 sma.sma_vhost -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:16:46.819     18:37:33 sma.sma_vhost -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:16:46.819     18:37:33 sma.sma_vhost -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:16:46.819     18:37:33 sma.sma_vhost -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:16:46.819     18:37:33 sma.sma_vhost -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:16:46.819     18:37:33 sma.sma_vhost -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:16:46.819     18:37:33 sma.sma_vhost -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:16:46.819     18:37:33 sma.sma_vhost -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:16:46.819     18:37:33 sma.sma_vhost -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:16:46.819     18:37:33 sma.sma_vhost -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:16:46.819     18:37:33 sma.sma_vhost -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:16:46.819     18:37:33 sma.sma_vhost -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:16:46.819     18:37:33 sma.sma_vhost -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:16:46.819     18:37:33 sma.sma_vhost -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:16:46.819     18:37:33 sma.sma_vhost -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:16:46.819     18:37:33 sma.sma_vhost -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:16:46.819     18:37:33 sma.sma_vhost -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:16:46.819    18:37:33 sma.sma_vhost -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh
00:16:46.819     18:37:33 sma.sma_vhost -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:16:46.819     18:37:33 sma.sma_vhost -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:16:46.819     18:37:33 sma.sma_vhost -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:16:46.819     18:37:33 sma.sma_vhost -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler
00:16:46.819     18:37:33 sma.sma_vhost -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:16:46.819     18:37:33 sma.sma_vhost -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh
00:16:46.819      18:37:33 sma.sma_vhost -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:16:46.819       18:37:33 sma.sma_vhost -- scheduler/cgroups.sh@244 -- # check_cgroup
00:16:46.819       18:37:33 sma.sma_vhost -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:16:46.819       18:37:33 sma.sma_vhost -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:16:46.819       18:37:33 sma.sma_vhost -- scheduler/cgroups.sh@10 -- # echo 2
00:16:46.819      18:37:33 sma.sma_vhost -- scheduler/cgroups.sh@244 -- # cgroup_version=2
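The `check_cgroup` trace above probes for cgroup v2 by testing for the root `cgroup.controllers` file. A minimal sketch of that probe, assuming (as the trace suggests) that the absence of the file means cgroup v1:

```shell
#!/usr/bin/env bash
# Sketch of the cgroup-version probe seen in cgroups.sh above: cgroup v2
# unified hierarchy exposes cgroup.controllers at the mount root; if it is
# missing we assume the legacy v1 layout. Fallback value is an assumption.
check_cgroup() {
  if [[ -e /sys/fs/cgroup/cgroup.controllers ]]; then
    echo 2
  else
    echo 1
  fi
}

cgroup_version=$(check_cgroup)
echo "cgroup version: $cgroup_version"
```

In the log the file exists and `cpuset` is listed among the controllers, so the script settles on `cgroup_version=2`.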
00:16:46.819   18:37:33 sma.sma_vhost -- sma/vhost_blk.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:16:46.819   18:37:33 sma.sma_vhost -- sma/vhost_blk.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:16:46.819   18:37:33 sma.sma_vhost -- sma/vhost_blk.sh@49 -- # vm_no=0
00:16:46.819   18:37:33 sma.sma_vhost -- sma/vhost_blk.sh@50 -- # bus_size=32
00:16:46.819   18:37:33 sma.sma_vhost -- sma/vhost_blk.sh@52 -- # timing_enter setup_vm
00:16:46.819   18:37:33 sma.sma_vhost -- common/autotest_common.sh@726 -- # xtrace_disable
00:16:46.819   18:37:33 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:46.819   18:37:33 sma.sma_vhost -- sma/vhost_blk.sh@54 -- # vm_setup --force=0 --disk-type=virtio '--qemu-args=-qmp tcp:localhost:9090,server,nowait -device pci-bridge,chassis_nr=1,id=pci.spdk.0 -device pci-bridge,chassis_nr=2,id=pci.spdk.1' --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@518 -- # xtrace_disable
00:16:46.819   18:37:33 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:46.819  INFO: Creating new VM in /root/vhost_test/vms/0
00:16:46.819  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:16:46.819  INFO: TASK MASK: 1-2
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@671 -- # local node_num=0
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@672 -- # local boot_disk_present=false
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@60 -- # local verbose_out
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@61 -- # false
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@62 -- # verbose_out=
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@69 -- # local msg_type=INFO
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@70 -- # shift
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:16:46.819  INFO: NUMA NODE: 0
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@677 -- # [[ -n '' ]]
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@686 -- # [[ -z '' ]]
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@691 -- # (( 0 == 0 ))
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@691 -- # [[ virtio == virtio* ]]
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@692 -- # disks=("default_virtio.img")
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@701 -- # IFS=,
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@701 -- # read -r disk disk_type _
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@702 -- # [[ -z '' ]]
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@702 -- # disk_type=virtio
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@704 -- # case $disk_type in
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@706 -- # local raw_name=RAWSCSI
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@707 -- # local raw_disk=/root/vhost_test/vms/0/test.img
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@710 -- # [[ -f default_virtio.img ]]
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@714 -- # notice 'Creating Virtio disc /root/vhost_test/vms/0/test.img'
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@94 -- # message INFO 'Creating Virtio disc /root/vhost_test/vms/0/test.img'
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@60 -- # local verbose_out
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@61 -- # false
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@62 -- # verbose_out=
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@69 -- # local msg_type=INFO
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@70 -- # shift
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@71 -- # echo -e 'INFO: Creating Virtio disc /root/vhost_test/vms/0/test.img'
00:16:46.819  INFO: Creating Virtio disc /root/vhost_test/vms/0/test.img
00:16:46.819   18:37:33 sma.sma_vhost -- vhost/common.sh@715 -- # dd if=/dev/zero of=/root/vhost_test/vms/0/test.img bs=1024k count=1024
00:16:47.392  1024+0 records in
00:16:47.392  1024+0 records out
00:16:47.392  1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.457268 s, 2.3 GB/s
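The `dd` step above zero-fills a 1 GiB raw image to back the virtio-scsi disk. A small sketch of the same technique, using an illustrative path and a 16 MiB size instead of the real `/root/vhost_test/vms/0/test.img` at 1024 MiB:

```shell
#!/usr/bin/env bash
# Sketch of the raw-disk creation step above; path and size are illustrative.
# bs=1024k count=16 writes 16 MiB of zeros (the log uses count=1024 for 1 GiB).
img=/tmp/virtio_test.img
dd if=/dev/zero of="$img" bs=1024k count=16 status=none
stat -c '%s bytes' "$img"   # prints: 16777216 bytes
```

A zero-filled file is enough here because QEMU is later pointed at it with `format=raw`; no partition table or filesystem is needed for the guest to see it as a blank disk.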
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@718 -- # cmd+=(-device "virtio-scsi-pci,num_queues=$queue_number")
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@719 -- # cmd+=(-device "scsi-hd,drive=hd$i,vendor=$raw_name")
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@720 -- # cmd+=(-drive "if=none,id=hd$i,file=$raw_disk,format=raw$raw_cache")
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@780 -- # [[ -n '' ]]
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@785 -- # (( 1 ))
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@785 -- # cmd+=("${qemu_args[@]}")
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/0/run.sh'
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/0/run.sh'
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@60 -- # local verbose_out
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@61 -- # false
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@62 -- # verbose_out=
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@69 -- # local msg_type=INFO
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@70 -- # shift
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/0/run.sh'
00:16:47.392  INFO: Saving to /root/vhost_test/vms/0/run.sh
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@787 -- # cat
00:16:47.392    18:37:33 sma.sma_vhost -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 1-2 /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :100 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10002,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/0/qemu.pid -serial file:/root/vhost_test/vms/0/serial.log -D /root/vhost_test/vms/0/qemu.log -chardev file,path=/root/vhost_test/vms/0/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10000-:22,hostfwd=tcp::10001-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device virtio-scsi-pci,num_queues=2 -device scsi-hd,drive=hd,vendor=RAWSCSI -drive if=none,id=hd,file=/root/vhost_test/vms/0/test.img,format=raw '-qmp tcp:localhost:9090,server,nowait -device pci-bridge,chassis_nr=1,id=pci.spdk.0 -device pci-bridge,chassis_nr=2,id=pci.spdk.1'
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/0/run.sh
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@827 -- # echo 10000
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@828 -- # echo 10001
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@829 -- # echo 10002
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/0/migration_port
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@832 -- # [[ -z '' ]]
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@834 -- # echo 10004
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@835 -- # echo 100
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@837 -- # [[ -z '' ]]
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@838 -- # [[ -z '' ]]
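Lines 786-824 of `vhost/common.sh` in the trace print the assembled QEMU command into `run.sh` and mark it executable, so the VM can be (re)started later by invoking one file. A hedged sketch of that save-and-chmod pattern, with a throwaway directory and a stub command in place of the real QEMU invocation:

```shell
#!/usr/bin/env bash
# Sketch of the "save to run.sh, chmod +x" step above; vm_dir is an
# illustrative temp path, not the real /root/vhost_test/vms/0, and the
# script body is a stand-in for the taskset/qemu-system-x86_64 line.
vm_dir=/tmp/vhost_sketch_vm0
mkdir -p "$vm_dir"
printf '%s\n' '#!/usr/bin/env bash' 'echo "VM would start here"' > "$vm_dir/run.sh"
chmod +x "$vm_dir/run.sh"
"$vm_dir/run.sh"   # prints: VM would start here
```

Persisting the command this way also doubles as a record of exactly which QEMU arguments the test used, which is what makes the later `INFO: running /root/vhost_test/vms/0/run.sh` step reproducible.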
00:16:47.392   18:37:33 sma.sma_vhost -- sma/vhost_blk.sh@59 -- # vm_run 0
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@843 -- # local run_all=false
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@844 -- # local vms_to_run=
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@846 -- # getopts a-: optchar
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@856 -- # false
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@859 -- # shift 0
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@860 -- # for vm in "$@"
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@861 -- # vm_num_is_valid 0
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/0/run.sh ]]
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@866 -- # vms_to_run+=' 0'
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@871 -- # vm_is_running 0
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@369 -- # vm_num_is_valid 0
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/0
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@373 -- # return 1
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/0/run.sh'
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/0/run.sh'
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@60 -- # local verbose_out
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@61 -- # false
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@62 -- # verbose_out=
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@69 -- # local msg_type=INFO
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@70 -- # shift
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/0/run.sh'
00:16:47.392  INFO: running /root/vhost_test/vms/0/run.sh
00:16:47.392   18:37:33 sma.sma_vhost -- vhost/common.sh@877 -- # /root/vhost_test/vms/0/run.sh
00:16:47.392  Running VM in /root/vhost_test/vms/0
00:16:47.693  Waiting for QEMU pid file
00:16:48.755  === qemu.log ===
00:16:48.755  === qemu.log ===
00:16:48.755   18:37:35 sma.sma_vhost -- sma/vhost_blk.sh@60 -- # vm_wait_for_boot 300 0
00:16:48.755   18:37:35 sma.sma_vhost -- vhost/common.sh@913 -- # assert_number 300
00:16:48.755   18:37:35 sma.sma_vhost -- vhost/common.sh@281 -- # [[ 300 =~ [0-9]+ ]]
00:16:48.755   18:37:35 sma.sma_vhost -- vhost/common.sh@281 -- # return 0
00:16:48.755   18:37:35 sma.sma_vhost -- vhost/common.sh@915 -- # xtrace_disable
00:16:48.755   18:37:35 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:48.755  INFO: Waiting for VMs to boot
00:16:48.755  INFO: waiting for VM0 (/root/vhost_test/vms/0)
00:17:10.742  
00:17:10.742  INFO: VM0 ready
00:17:10.742  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:17:10.742  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:17:10.742  INFO: all VMs ready
00:17:10.742   18:37:56 sma.sma_vhost -- vhost/common.sh@973 -- # return 0
00:17:10.742   18:37:56 sma.sma_vhost -- sma/vhost_blk.sh@61 -- # timing_exit setup_vm
00:17:10.742   18:37:56 sma.sma_vhost -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:10.742   18:37:56 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:10.742   18:37:56 sma.sma_vhost -- sma/vhost_blk.sh@63 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/vhost -S /var/tmp -m 0x3 --wait-for-rpc
00:17:10.742   18:37:56 sma.sma_vhost -- sma/vhost_blk.sh@64 -- # vhostpid=501937
00:17:10.742   18:37:56 sma.sma_vhost -- sma/vhost_blk.sh@66 -- # waitforlisten 501937
00:17:10.742   18:37:56 sma.sma_vhost -- common/autotest_common.sh@835 -- # '[' -z 501937 ']'
00:17:10.742   18:37:56 sma.sma_vhost -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:10.742   18:37:56 sma.sma_vhost -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:10.742   18:37:56 sma.sma_vhost -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:10.742  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:10.742   18:37:56 sma.sma_vhost -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:10.742   18:37:56 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:10.742  [2024-11-17 18:37:57.039974] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:17:10.742  [2024-11-17 18:37:57.040099] [ DPDK EAL parameters: vhost --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid501937 ]
00:17:10.742  EAL: No free 2048 kB hugepages reported on node 1
00:17:10.742  [2024-11-17 18:37:57.172337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:17:10.742  [2024-11-17 18:37:57.213168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:10.742  [2024-11-17 18:37:57.213206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:17:11.310   18:37:57 sma.sma_vhost -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:11.310   18:37:57 sma.sma_vhost -- common/autotest_common.sh@868 -- # return 0
00:17:11.310   18:37:57 sma.sma_vhost -- sma/vhost_blk.sh@69 -- # rpc_cmd dpdk_cryptodev_scan_accel_module
00:17:11.310   18:37:57 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:11.310   18:37:57 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:11.310   18:37:57 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:11.310   18:37:57 sma.sma_vhost -- sma/vhost_blk.sh@70 -- # rpc_cmd dpdk_cryptodev_set_driver -d crypto_aesni_mb
00:17:11.310   18:37:57 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:11.310   18:37:57 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:11.569  [2024-11-17 18:37:57.887712] accel_dpdk_cryptodev.c: 224:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb
00:17:11.569   18:37:57 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:11.569   18:37:57 sma.sma_vhost -- sma/vhost_blk.sh@71 -- # rpc_cmd accel_assign_opc -o encrypt -m dpdk_cryptodev
00:17:11.569   18:37:57 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:11.569   18:37:57 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:11.569  [2024-11-17 18:37:57.895694] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:17:11.569   18:37:57 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:11.570   18:37:57 sma.sma_vhost -- sma/vhost_blk.sh@72 -- # rpc_cmd accel_assign_opc -o decrypt -m dpdk_cryptodev
00:17:11.570   18:37:57 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:11.570   18:37:57 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:11.570  [2024-11-17 18:37:57.903686] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:17:11.570   18:37:57 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:11.570   18:37:57 sma.sma_vhost -- sma/vhost_blk.sh@73 -- # rpc_cmd framework_start_init
00:17:11.570   18:37:57 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:11.570   18:37:57 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:11.570  [2024-11-17 18:37:57.966369] accel_dpdk_cryptodev.c:1179:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 1
00:17:11.570   18:37:58 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:11.570   18:37:58 sma.sma_vhost -- sma/vhost_blk.sh@93 -- # smapid=502146
00:17:11.570   18:37:58 sma.sma_vhost -- sma/vhost_blk.sh@96 -- # sma_waitforlisten
00:17:11.570   18:37:58 sma.sma_vhost -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:17:11.570   18:37:58 sma.sma_vhost -- sma/vhost_blk.sh@75 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:17:11.570   18:37:58 sma.sma_vhost -- sma/common.sh@8 -- # local sma_port=8080
00:17:11.570    18:37:58 sma.sma_vhost -- sma/vhost_blk.sh@75 -- # cat
00:17:11.570   18:37:58 sma.sma_vhost -- sma/common.sh@10 -- # (( i = 0 ))
00:17:11.570   18:37:58 sma.sma_vhost -- sma/common.sh@10 -- # (( i < 5 ))
00:17:11.570   18:37:58 sma.sma_vhost -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:17:11.570   18:37:58 sma.sma_vhost -- sma/common.sh@14 -- # sleep 1s
00:17:11.828  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:11.828  I0000 00:00:1731865078.245554  502146 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:12.769   18:37:59 sma.sma_vhost -- sma/common.sh@10 -- # (( i++ ))
00:17:12.769   18:37:59 sma.sma_vhost -- sma/common.sh@10 -- # (( i < 5 ))
00:17:12.769   18:37:59 sma.sma_vhost -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:17:12.769   18:37:59 sma.sma_vhost -- sma/common.sh@12 -- # return 0
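The `sma_waitforlisten` trace above polls the SMA port with `nc -z` in a bounded loop, sleeping 1 s between attempts (the first probe fails, the second succeeds). A reconstruction of that loop; the retry count is made a parameter here for illustration, whereas the original appears fixed at 5:

```shell
#!/usr/bin/env bash
# Hedged sketch of the listen-wait loop in sma/common.sh above: retry a
# zero-I/O TCP probe (nc -z) until the port accepts, or give up after
# $retries attempts with 1 s sleeps in between.
wait_for_listen() {
  local addr=$1 port=$2 retries=${3:-5} i
  for ((i = 0; i < retries; i++)); do
    if nc -z "$addr" "$port"; then
      return 0   # something is listening
    fi
    sleep 1
  done
  return 1       # timed out
}
```

The bounded loop matters: with an unbounded wait, a daemon that crashes on startup would hang the whole test instead of failing it after a few seconds.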
00:17:12.769    18:37:59 sma.sma_vhost -- sma/vhost_blk.sh@99 -- # vm_exec 0 'lsblk | grep -E "^vd." | wc -l'
00:17:12.769    18:37:59 sma.sma_vhost -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:17:12.769    18:37:59 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:17:12.769    18:37:59 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:17:12.769    18:37:59 sma.sma_vhost -- vhost/common.sh@338 -- # local vm_num=0
00:17:12.769    18:37:59 sma.sma_vhost -- vhost/common.sh@339 -- # shift
00:17:12.769     18:37:59 sma.sma_vhost -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:17:12.769     18:37:59 sma.sma_vhost -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:17:12.769     18:37:59 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:17:12.769     18:37:59 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:17:12.769     18:37:59 sma.sma_vhost -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:17:12.769     18:37:59 sma.sma_vhost -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:17:12.769    18:37:59 sma.sma_vhost -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'lsblk | grep -E "^vd." | wc -l'
00:17:12.769  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:17:12.769   18:37:59 sma.sma_vhost -- sma/vhost_blk.sh@99 -- # [[ 0 -eq 0 ]]
00:17:12.769   18:37:59 sma.sma_vhost -- sma/vhost_blk.sh@102 -- # rpc_cmd bdev_null_create null0 100 4096
00:17:12.769   18:37:59 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:12.769   18:37:59 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:12.769  null0
00:17:12.769   18:37:59 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:12.769   18:37:59 sma.sma_vhost -- sma/vhost_blk.sh@103 -- # rpc_cmd bdev_null_create null1 100 4096
00:17:12.769   18:37:59 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:12.769   18:37:59 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:12.769  null1
00:17:12.769   18:37:59 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:12.769    18:37:59 sma.sma_vhost -- sma/vhost_blk.sh@104 -- # jq -r '.[].uuid'
00:17:12.769    18:37:59 sma.sma_vhost -- sma/vhost_blk.sh@104 -- # rpc_cmd bdev_get_bdevs -b null0
00:17:12.769    18:37:59 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:12.769    18:37:59 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:13.028    18:37:59 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:13.028   18:37:59 sma.sma_vhost -- sma/vhost_blk.sh@104 -- # uuid=85c7b3f7-7bb7-4672-9a0d-95e89ea87120
00:17:13.028    18:37:59 sma.sma_vhost -- sma/vhost_blk.sh@105 -- # jq -r '.[].uuid'
00:17:13.028    18:37:59 sma.sma_vhost -- sma/vhost_blk.sh@105 -- # rpc_cmd bdev_get_bdevs -b null1
00:17:13.028    18:37:59 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:13.028    18:37:59 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:13.028    18:37:59 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:13.028   18:37:59 sma.sma_vhost -- sma/vhost_blk.sh@105 -- # uuid2=252dba04-ee81-4239-8ab6-00a3dc7be0ad
00:17:13.028    18:37:59 sma.sma_vhost -- sma/vhost_blk.sh@108 -- # jq -r .handle
00:17:13.028    18:37:59 sma.sma_vhost -- sma/vhost_blk.sh@108 -- # create_device 0 85c7b3f7-7bb7-4672-9a0d-95e89ea87120
00:17:13.028    18:37:59 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:13.028     18:37:59 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 85c7b3f7-7bb7-4672-9a0d-95e89ea87120
00:17:13.028     18:37:59 sma.sma_vhost -- sma/common.sh@20 -- # python
00:17:13.287  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:13.287  I0000 00:00:1731865079.660930  502396 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:13.287  I0000 00:00:1731865079.662668  502396 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:13.287  I0000 00:00:1731865079.664143  502405 subchannel.cc:806] subchannel 0x563f3d649280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x563f3d5cb880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x563f3d78acf0, grpc.internal.client_channel_call_destination=0x7f8e76de8390, grpc.internal.event_engine=0x563f3d2457d0, grpc.internal.security_connector=0x563f3d5a7a50, grpc.internal.subchannel_pool=0x563f3d7bc4f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x563f3d7bf890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:37:59.663593734+01:00"}), backing off for 1000 ms
00:17:13.287  VHOST_CONFIG: (/var/tmp/sma-0) vhost-user server: socket created, fd: 221
00:17:13.287  VHOST_CONFIG: (/var/tmp/sma-0) binding succeeded
00:17:14.224  VHOST_CONFIG: (/var/tmp/sma-0) new vhost user connection is 59
00:17:14.224  VHOST_CONFIG: (/var/tmp/sma-0) new device, handle is 0
00:17:14.224  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES
00:17:14.224  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_PROTOCOL_FEATURES
00:17:14.224  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_PROTOCOL_FEATURES
00:17:14.224  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Vhost-user protocol features: 0x11ebf
00:17:14.224  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_QUEUE_NUM
00:17:14.224  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_BACKEND_REQ_FD
00:17:14.224  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_OWNER
00:17:14.224  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES
00:17:14.224  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:17:14.224  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:225
00:17:14.224  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR
00:17:14.224  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:17:14.224  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:226
00:17:14.224  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR
00:17:14.224  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_CONFIG
00:17:14.224   18:38:00 sma.sma_vhost -- sma/vhost_blk.sh@108 -- # devid0=virtio_blk:sma-0
00:17:14.224   18:38:00 sma.sma_vhost -- sma/vhost_blk.sh@109 -- # rpc_cmd vhost_get_controllers -n sma-0
00:17:14.224   18:38:00 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:14.224   18:38:00 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:14.224  [
00:17:14.224  {
00:17:14.224  "ctrlr": "sma-0",
00:17:14.224  "cpumask": "0x3",
00:17:14.224  "delay_base_us": 0,
00:17:14.224  "iops_threshold": 60000,
00:17:14.224  "socket": "/var/tmp/sma-0",
00:17:14.224  "sessions": [
00:17:14.224  {
00:17:14.224  "vid": 0,
00:17:14.224  "id": 0,
00:17:14.224  "name": "sma-0s0",
00:17:14.224  "started": false,
00:17:14.224  "max_queues": 0,
00:17:14.224  "inflight_task_cnt": 0
00:17:14.224  }
00:17:14.224  ],
00:17:14.224  "backend_specific": {
00:17:14.224  "block": {
00:17:14.224  "readonly": false,
00:17:14.224  "bdev": "null0",
00:17:14.224  "transport": "vhost_user_blk"
00:17:14.224  }
00:17:14.224  }
00:17:14.224  }
00:17:14.224  ]
00:17:14.224   18:38:00 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
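The `vhost_get_controllers` reply above is consumed elsewhere in these scripts via `jq`. A sketch of pulling fields out of a reply of that shape; the inline JSON below reproduces only the structure of the response, not the full payload:

```shell
#!/usr/bin/env bash
# Sketch: extracting the controller name and backing bdev from a
# vhost_get_controllers-style JSON array with jq. The inline document is a
# hand-written stand-in mirroring the reply shown in the log.
json='[{"ctrlr":"sma-0","socket":"/var/tmp/sma-0","backend_specific":{"block":{"bdev":"null0","readonly":false}}}]'
jq -r '.[0] | "\(.ctrlr) -> \(.backend_specific.block.bdev)"' <<< "$json"
# prints: sma-0 -> null0
```

`-r` emits raw strings instead of JSON-quoted ones, which is what makes the output safe to feed into further shell comparisons like the `[[ ... == ... ]]` checks in this test.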
00:17:14.224    18:38:00 sma.sma_vhost -- sma/vhost_blk.sh@111 -- # create_device 1 252dba04-ee81-4239-8ab6-00a3dc7be0ad
00:17:14.224    18:38:00 sma.sma_vhost -- sma/vhost_blk.sh@111 -- # jq -r .handle
00:17:14.224    18:38:00 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:14.224     18:38:00 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 252dba04-ee81-4239-8ab6-00a3dc7be0ad
00:17:14.224     18:38:00 sma.sma_vhost -- sma/common.sh@20 -- # python
00:17:14.483  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES
00:17:14.483  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150005446
00:17:14.483  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:17:14.483  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:17:14.483  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000008):
00:17:14.483  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 0
00:17:14.483  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 0
00:17:14.483  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 0
00:17:14.483  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 1
00:17:14.483  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 0
00:17:14.483  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:17:14.483  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:17:14.483  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_INFLIGHT_FD
00:17:14.483  VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd num_queues: 2
00:17:14.483  VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd queue_size: 128
00:17:14.483  VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_size: 4224
00:17:14.483  VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_offset: 0
00:17:14.483  VHOST_CONFIG: (/var/tmp/sma-0) send inflight fd: 58
00:17:14.483  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_INFLIGHT_FD
00:17:14.483  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_size: 4224
00:17:14.483  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_offset: 0
00:17:14.483  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd num_queues: 2
00:17:14.483  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd queue_size: 128
00:17:14.483  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd fd: 227
00:17:14.483  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd pervq_inflight_size: 2112
00:17:14.483  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:17:14.483  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:58
00:17:14.483  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:17:14.483  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:225
00:17:14.483  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES
00:17:14.483  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150005446
00:17:14.483  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:17:14.483  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_MEM_TABLE
00:17:14.483  VHOST_CONFIG: (/var/tmp/sma-0) guest memory region size: 0x40000000
00:17:14.483  VHOST_CONFIG: (/var/tmp/sma-0) 	 guest physical addr: 0x0
00:17:14.484  VHOST_CONFIG: (/var/tmp/sma-0) 	 guest virtual  addr: 0x7f9343e00000
00:17:14.484  VHOST_CONFIG: (/var/tmp/sma-0) 	 host  virtual  addr: 0x7f515a800000
00:17:14.484  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap addr : 0x7f515a800000
00:17:14.484  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap size : 0x40000000
00:17:14.484  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap align: 0x200000
00:17:14.484  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap off  : 0x0
00:17:14.484  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM
00:17:14.484  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE
00:17:14.484  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 last_used_idx:0 last_avail_idx:0.
00:17:14.484  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR
00:17:14.484  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK
00:17:14.484  VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:0 file:228
00:17:14.484  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM
00:17:14.484  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE
00:17:14.484  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 last_used_idx:0 last_avail_idx:0.
00:17:14.484  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR
00:17:14.484  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK
00:17:14.484  VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:1 file:229
00:17:14.484  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:17:14.484  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 0
00:17:14.484  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:17:14.484  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 1
00:17:14.484  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:17:14.484  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:17:14.484  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x0000000f):
00:17:14.484  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 0
00:17:14.484  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 1
00:17:14.484  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 1
00:17:14.484  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 1
00:17:14.484  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 1
00:17:14.484  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:17:14.484  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:17:14.484  VHOST_CONFIG: (/var/tmp/sma-0) virtio is now ready for processing.
00:17:14.743  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:14.743  I0000 00:00:1731865081.071309  502637 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:14.743  I0000 00:00:1731865081.073053  502637 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:14.743  I0000 00:00:1731865081.074508  502834 subchannel.cc:806] subchannel 0x5578492ab280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55784922d880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5578493eccf0, grpc.internal.client_channel_call_destination=0x7f19b0adc390, grpc.internal.event_engine=0x557848ea77d0, grpc.internal.security_connector=0x557849213aa0, grpc.internal.subchannel_pool=0x55784941e4f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x557849421890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:01.074001314+01:00"}), backing off for 999 ms
00:17:14.743  VHOST_CONFIG: (/var/tmp/sma-1) vhost-user server: socket created, fd: 232
00:17:14.743  VHOST_CONFIG: (/var/tmp/sma-1) binding succeeded
00:17:15.310  VHOST_CONFIG: (/var/tmp/sma-1) new vhost user connection is 230
00:17:15.310  VHOST_CONFIG: (/var/tmp/sma-1) new device, handle is 1
00:17:15.310  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_FEATURES
00:17:15.310  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_PROTOCOL_FEATURES
00:17:15.310  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_PROTOCOL_FEATURES
00:17:15.310  VHOST_CONFIG: (/var/tmp/sma-1) negotiated Vhost-user protocol features: 0x11ebf
00:17:15.310  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_QUEUE_NUM
00:17:15.310  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_BACKEND_REQ_FD
00:17:15.310  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_OWNER
00:17:15.310  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_FEATURES
00:17:15.310  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_CALL
00:17:15.310  VHOST_CONFIG: (/var/tmp/sma-1) vring call idx:0 file:234
00:17:15.310  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ERR
00:17:15.310  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_CALL
00:17:15.310  VHOST_CONFIG: (/var/tmp/sma-1) vring call idx:1 file:235
00:17:15.310  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ERR
00:17:15.310  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_CONFIG
00:17:15.569   18:38:01 sma.sma_vhost -- sma/vhost_blk.sh@111 -- # devid1=virtio_blk:sma-1
00:17:15.569   18:38:01 sma.sma_vhost -- sma/vhost_blk.sh@112 -- # rpc_cmd vhost_get_controllers -n sma-0
00:17:15.569   18:38:01 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:15.569   18:38:01 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:15.569  [
00:17:15.569  {
00:17:15.569  "ctrlr": "sma-0",
00:17:15.569  "cpumask": "0x3",
00:17:15.569  "delay_base_us": 0,
00:17:15.569  "iops_threshold": 60000,
00:17:15.569  "socket": "/var/tmp/sma-0",
00:17:15.569  "sessions": [
00:17:15.569  {
00:17:15.569  "vid": 0,
00:17:15.569  "id": 0,
00:17:15.569  "name": "sma-0s0",
00:17:15.569  "started": true,
00:17:15.569  "max_queues": 2,
00:17:15.569  "inflight_task_cnt": 0
00:17:15.569  }
00:17:15.569  ],
00:17:15.569  "backend_specific": {
00:17:15.569  "block": {
00:17:15.569  "readonly": false,
00:17:15.569  "bdev": "null0",
00:17:15.569  "transport": "vhost_user_blk"
00:17:15.569  }
00:17:15.569  }
00:17:15.569  }
00:17:15.569  ]
00:17:15.569   18:38:01 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:15.569   18:38:01 sma.sma_vhost -- sma/vhost_blk.sh@113 -- # rpc_cmd vhost_get_controllers -n sma-1
00:17:15.569   18:38:01 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:15.569   18:38:01 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:15.569  [
00:17:15.569  {
00:17:15.569  "ctrlr": "sma-1",
00:17:15.569  "cpumask": "0x3",
00:17:15.569  "delay_base_us": 0,
00:17:15.569  "iops_threshold": 60000,
00:17:15.569  "socket": "/var/tmp/sma-1",
00:17:15.569  "sessions": [
00:17:15.569  {
00:17:15.569  "vid": 1,
00:17:15.569  "id": 0,
00:17:15.569  "name": "sma-1s1",
00:17:15.569  "started": false,
00:17:15.569  "max_queues": 0,
00:17:15.569  "inflight_task_cnt": 0
00:17:15.569  }
00:17:15.569  ],
00:17:15.569  "backend_specific": {
00:17:15.569  "block": {
00:17:15.569  "readonly": false,
00:17:15.569  "bdev": "null1",
00:17:15.569  "transport": "vhost_user_blk"
00:17:15.569  }
00:17:15.569  }
00:17:15.569  }
00:17:15.569  ]
00:17:15.569   18:38:01 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:15.569   18:38:01 sma.sma_vhost -- sma/vhost_blk.sh@114 -- # [[ virtio_blk:sma-0 != \v\i\r\t\i\o\_\b\l\k\:\s\m\a\-\1 ]]
00:17:15.569    18:38:01 sma.sma_vhost -- sma/vhost_blk.sh@117 -- # rpc_cmd vhost_get_controllers
00:17:15.569    18:38:01 sma.sma_vhost -- sma/vhost_blk.sh@117 -- # jq -r '. | length'
00:17:15.569    18:38:01 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:15.569    18:38:01 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_FEATURES
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) negotiated Virtio features: 0x150005446
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_STATUS
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_STATUS
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) new device status(0x00000008):
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) 	-RESET: 0
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) 	-ACKNOWLEDGE: 0
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) 	-DRIVER: 0
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) 	-FEATURES_OK: 1
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) 	-DRIVER_OK: 0
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) 	-DEVICE_NEED_RESET: 0
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) 	-FAILED: 0
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_INFLIGHT_FD
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) get_inflight_fd num_queues: 2
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) get_inflight_fd queue_size: 128
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) send inflight mmap_size: 4224
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) send inflight mmap_offset: 0
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) send inflight fd: 231
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_INFLIGHT_FD
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) set_inflight_fd mmap_size: 4224
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) set_inflight_fd mmap_offset: 0
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) set_inflight_fd num_queues: 2
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) set_inflight_fd queue_size: 128
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) set_inflight_fd fd: 236
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) set_inflight_fd pervq_inflight_size: 2112
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_CALL
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) vring call idx:0 file:231
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_CALL
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) vring call idx:1 file:234
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_FEATURES
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) negotiated Virtio features: 0x150005446
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_STATUS
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_MEM_TABLE
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) guest memory region size: 0x40000000
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) 	 guest physical addr: 0x0
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) 	 guest virtual  addr: 0x7f9343e00000
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) 	 host  virtual  addr: 0x7f511a800000
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) 	 mmap addr : 0x7f511a800000
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) 	 mmap size : 0x40000000
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) 	 mmap align: 0x200000
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) 	 mmap off  : 0x0
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_NUM
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_BASE
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) vring base idx:0 last_used_idx:0 last_avail_idx:0.
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ADDR
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_KICK
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) vring kick idx:0 file:237
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_NUM
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_BASE
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) vring base idx:1 last_used_idx:0 last_avail_idx:0.
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ADDR
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_KICK
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) vring kick idx:1 file:239
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ENABLE
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) set queue enable: 1 to qp idx: 0
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ENABLE
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) set queue enable: 1 to qp idx: 1
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_STATUS
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_STATUS
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) new device status(0x0000000f):
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) 	-RESET: 0
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) 	-ACKNOWLEDGE: 1
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) 	-DRIVER: 1
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) 	-FEATURES_OK: 1
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) 	-DRIVER_OK: 1
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) 	-DEVICE_NEED_RESET: 0
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) 	-FAILED: 0
00:17:15.569  VHOST_CONFIG: (/var/tmp/sma-1) virtio is now ready for processing.
00:17:15.569    18:38:01 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:15.569   18:38:01 sma.sma_vhost -- sma/vhost_blk.sh@117 -- # [[ 2 -eq 2 ]]
00:17:15.569    18:38:01 sma.sma_vhost -- sma/vhost_blk.sh@121 -- # jq -r .handle
00:17:15.569    18:38:01 sma.sma_vhost -- sma/vhost_blk.sh@121 -- # create_device 0 85c7b3f7-7bb7-4672-9a0d-95e89ea87120
00:17:15.569    18:38:01 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:15.569     18:38:01 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 85c7b3f7-7bb7-4672-9a0d-95e89ea87120
00:17:15.569     18:38:01 sma.sma_vhost -- sma/common.sh@20 -- # python
00:17:15.828  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:15.828  I0000 00:00:1731865082.251076  502872 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:15.828  I0000 00:00:1731865082.252989  502872 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:15.828  I0000 00:00:1731865082.254486  503031 subchannel.cc:806] subchannel 0x560680598280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x56068051a880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5606806d9cf0, grpc.internal.client_channel_call_destination=0x7fb758b6c390, grpc.internal.event_engine=0x5606801947d0, grpc.internal.security_connector=0x5606804f6a50, grpc.internal.subchannel_pool=0x56068070b4f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x56068070e890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:02.253976683+01:00"}), backing off for 999 ms
00:17:15.828   18:38:02 sma.sma_vhost -- sma/vhost_blk.sh@121 -- # tmp0=virtio_blk:sma-0
00:17:15.828    18:38:02 sma.sma_vhost -- sma/vhost_blk.sh@122 -- # create_device 1 252dba04-ee81-4239-8ab6-00a3dc7be0ad
00:17:15.828    18:38:02 sma.sma_vhost -- sma/vhost_blk.sh@122 -- # jq -r .handle
00:17:15.828    18:38:02 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:15.828     18:38:02 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 252dba04-ee81-4239-8ab6-00a3dc7be0ad
00:17:15.828     18:38:02 sma.sma_vhost -- sma/common.sh@20 -- # python
00:17:16.087  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:16.087  I0000 00:00:1731865082.621488  503093 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:16.087  I0000 00:00:1731865082.623297  503093 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:16.087  I0000 00:00:1731865082.624786  503105 subchannel.cc:806] subchannel 0x56401a489280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x56401a40b880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x56401a5cacf0, grpc.internal.client_channel_call_destination=0x7f5396afb390, grpc.internal.event_engine=0x56401a0857d0, grpc.internal.security_connector=0x56401a3f1aa0, grpc.internal.subchannel_pool=0x56401a5fc4f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x56401a5ff890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:02.624275012+01:00"}), backing off for 1000 ms
00:17:16.346   18:38:02 sma.sma_vhost -- sma/vhost_blk.sh@122 -- # tmp1=virtio_blk:sma-1
00:17:16.346   18:38:02 sma.sma_vhost -- sma/vhost_blk.sh@125 -- # NOT create_device 1 85c7b3f7-7bb7-4672-9a0d-95e89ea87120
00:17:16.346   18:38:02 sma.sma_vhost -- sma/vhost_blk.sh@125 -- # jq -r .handle
00:17:16.346   18:38:02 sma.sma_vhost -- common/autotest_common.sh@652 -- # local es=0
00:17:16.346   18:38:02 sma.sma_vhost -- common/autotest_common.sh@654 -- # valid_exec_arg create_device 1 85c7b3f7-7bb7-4672-9a0d-95e89ea87120
00:17:16.346   18:38:02 sma.sma_vhost -- common/autotest_common.sh@640 -- # local arg=create_device
00:17:16.346   18:38:02 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:16.346    18:38:02 sma.sma_vhost -- common/autotest_common.sh@644 -- # type -t create_device
00:17:16.346   18:38:02 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:16.346   18:38:02 sma.sma_vhost -- common/autotest_common.sh@655 -- # create_device 1 85c7b3f7-7bb7-4672-9a0d-95e89ea87120
00:17:16.346   18:38:02 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:16.346    18:38:02 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 85c7b3f7-7bb7-4672-9a0d-95e89ea87120
00:17:16.346    18:38:02 sma.sma_vhost -- sma/common.sh@20 -- # python
00:17:16.605  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:16.605  I0000 00:00:1731865082.931080  503128 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:16.605  I0000 00:00:1731865082.932609  503128 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:16.605  I0000 00:00:1731865082.933815  503131 subchannel.cc:806] subchannel 0x56091cb21280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x56091caa3880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x56091cc62cf0, grpc.internal.client_channel_call_destination=0x7ffa941c0390, grpc.internal.event_engine=0x56091c71d7d0, grpc.internal.security_connector=0x56091ca89aa0, grpc.internal.subchannel_pool=0x56091cc944f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x56091cc97890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:02.933377135+01:00"}), backing off for 1000 ms
00:17:16.605  Traceback (most recent call last):
00:17:16.605    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:17:16.605      main(sys.argv[1:])
00:17:16.605    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:17:16.605      result = client.call(request['method'], request.get('params', {}))
00:17:16.605               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:16.605    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:17:16.605      response = func(request=json_format.ParseDict(params, input()))
00:17:16.605                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:16.605    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:17:16.605      return _end_unary_response_blocking(state, call, False, None)
00:17:16.605             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:16.605    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:17:16.605      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:17:16.605      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:16.605  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:17:16.605  	status = StatusCode.INTERNAL
00:17:16.605  	details = "Failed to create vhost device"
00:17:16.606  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Failed to create vhost device", grpc_status:13, created_time:"2024-11-17T18:38:02.982183975+01:00"}"
00:17:16.606  >
00:17:16.606   18:38:03 sma.sma_vhost -- common/autotest_common.sh@655 -- # es=1
00:17:16.606   18:38:03 sma.sma_vhost -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:16.606   18:38:03 sma.sma_vhost -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:16.606   18:38:03 sma.sma_vhost -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:16.606    18:38:03 sma.sma_vhost -- sma/vhost_blk.sh@128 -- # vm_exec 0 'lsblk | grep -E "^vd." | wc -l'
00:17:16.606    18:38:03 sma.sma_vhost -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:17:16.606    18:38:03 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:17:16.606    18:38:03 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:17:16.606    18:38:03 sma.sma_vhost -- vhost/common.sh@338 -- # local vm_num=0
00:17:16.606    18:38:03 sma.sma_vhost -- vhost/common.sh@339 -- # shift
00:17:16.606     18:38:03 sma.sma_vhost -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:17:16.606     18:38:03 sma.sma_vhost -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:17:16.606     18:38:03 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:17:16.606     18:38:03 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:17:16.606     18:38:03 sma.sma_vhost -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:17:16.606     18:38:03 sma.sma_vhost -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:17:16.606    18:38:03 sma.sma_vhost -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'lsblk | grep -E "^vd." | wc -l'
00:17:16.606  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:17:16.865   18:38:03 sma.sma_vhost -- sma/vhost_blk.sh@128 -- # [[ 2 -eq 2 ]]
00:17:16.865    18:38:03 sma.sma_vhost -- sma/vhost_blk.sh@130 -- # rpc_cmd vhost_get_controllers
00:17:16.865    18:38:03 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:16.865    18:38:03 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:16.865    18:38:03 sma.sma_vhost -- sma/vhost_blk.sh@130 -- # jq -r '. | length'
00:17:16.865    18:38:03 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:16.865   18:38:03 sma.sma_vhost -- sma/vhost_blk.sh@130 -- # [[ 2 -eq 2 ]]
00:17:16.865   18:38:03 sma.sma_vhost -- sma/vhost_blk.sh@131 -- # [[ virtio_blk:sma-0 == \v\i\r\t\i\o\_\b\l\k\:\s\m\a\-\0 ]]
00:17:16.865   18:38:03 sma.sma_vhost -- sma/vhost_blk.sh@132 -- # [[ virtio_blk:sma-1 == \v\i\r\t\i\o\_\b\l\k\:\s\m\a\-\1 ]]
00:17:16.865   18:38:03 sma.sma_vhost -- sma/vhost_blk.sh@135 -- # delete_device virtio_blk:sma-0
00:17:16.865   18:38:03 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:17.124  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:17.124  I0000 00:00:1731865083.519425  503165 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:17.124  I0000 00:00:1731865083.521148  503165 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:17.124  I0000 00:00:1731865083.522540  503359 subchannel.cc:806] subchannel 0x55f29f806280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55f29f788880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55f29f947cf0, grpc.internal.client_channel_call_destination=0x7f8a77eaf390, grpc.internal.event_engine=0x55f29f4027d0, grpc.internal.security_connector=0x55f29f764a50, grpc.internal.subchannel_pool=0x55f29f9794f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55f29f97c890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:03.52207618+01:00"}), backing off for 999 ms
00:17:17.692  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:17:17.692  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000000):
00:17:17.692  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 1
00:17:17.692  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 0
00:17:17.693  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 0
00:17:17.693  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 0
00:17:17.693  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 0
00:17:17.693  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:17:17.693  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:17:17.693  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:17:17.693  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 0
00:17:17.693  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:17:17.693  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 1
00:17:17.693  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE
00:17:17.693  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 file:1
00:17:17.693  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE
00:17:17.693  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 file:49
00:17:17.693  VHOST_CONFIG: (/var/tmp/sma-0) vhost peer closed
00:17:17.693  {}
00:17:17.693   18:38:04 sma.sma_vhost -- sma/vhost_blk.sh@136 -- # NOT rpc_cmd vhost_get_controllers -n sma-0
00:17:17.693   18:38:04 sma.sma_vhost -- common/autotest_common.sh@652 -- # local es=0
00:17:17.693   18:38:04 sma.sma_vhost -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd vhost_get_controllers -n sma-0
00:17:17.693   18:38:04 sma.sma_vhost -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:17:17.693   18:38:04 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:17.693    18:38:04 sma.sma_vhost -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:17:17.693   18:38:04 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:17.693   18:38:04 sma.sma_vhost -- common/autotest_common.sh@655 -- # rpc_cmd vhost_get_controllers -n sma-0
00:17:17.693   18:38:04 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:17.693   18:38:04 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:17.693  request:
00:17:17.693  {
00:17:17.693    "name": "sma-0",
00:17:17.693    "method": "vhost_get_controllers",
00:17:17.693    "req_id": 1
00:17:17.693  }
00:17:17.693  Got JSON-RPC error response
00:17:17.693  response:
00:17:17.693  {
00:17:17.693    "code": -32603,
00:17:17.693    "message": "No such device"
00:17:17.693  }
00:17:17.693   18:38:04 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:17:17.693   18:38:04 sma.sma_vhost -- common/autotest_common.sh@655 -- # es=1
00:17:17.693   18:38:04 sma.sma_vhost -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:17.693   18:38:04 sma.sma_vhost -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:17.693   18:38:04 sma.sma_vhost -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:17.693    18:38:04 sma.sma_vhost -- sma/vhost_blk.sh@137 -- # rpc_cmd vhost_get_controllers
00:17:17.693    18:38:04 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:17.693    18:38:04 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:17.693    18:38:04 sma.sma_vhost -- sma/vhost_blk.sh@137 -- # jq -r '. | length'
00:17:17.693    18:38:04 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:17.952   18:38:04 sma.sma_vhost -- sma/vhost_blk.sh@137 -- # [[ 1 -eq 1 ]]
00:17:17.952   18:38:04 sma.sma_vhost -- sma/vhost_blk.sh@139 -- # delete_device virtio_blk:sma-1
00:17:17.952   18:38:04 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:17.952  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:17.952  I0000 00:00:1731865084.481748  503393 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:17.952  I0000 00:00:1731865084.483618  503393 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:17.952  I0000 00:00:1731865084.484950  503397 subchannel.cc:806] subchannel 0x55dec93ea280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55dec936c880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55dec952bcf0, grpc.internal.client_channel_call_destination=0x7ffa64b27390, grpc.internal.event_engine=0x55dec8fe67d0, grpc.internal.security_connector=0x55dec9348a50, grpc.internal.subchannel_pool=0x55dec955d4f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55dec9560890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:04.484449072+01:00"}), backing off for 1000 ms
00:17:17.952  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_STATUS
00:17:17.952  VHOST_CONFIG: (/var/tmp/sma-1) new device status(0x00000000):
00:17:17.952  VHOST_CONFIG: (/var/tmp/sma-1) 	-RESET: 1
00:17:17.952  VHOST_CONFIG: (/var/tmp/sma-1) 	-ACKNOWLEDGE: 0
00:17:17.952  VHOST_CONFIG: (/var/tmp/sma-1) 	-DRIVER: 0
00:17:17.952  VHOST_CONFIG: (/var/tmp/sma-1) 	-FEATURES_OK: 0
00:17:17.952  VHOST_CONFIG: (/var/tmp/sma-1) 	-DRIVER_OK: 0
00:17:17.952  VHOST_CONFIG: (/var/tmp/sma-1) 	-DEVICE_NEED_RESET: 0
00:17:17.952  VHOST_CONFIG: (/var/tmp/sma-1) 	-FAILED: 0
00:17:17.952  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ENABLE
00:17:17.952  VHOST_CONFIG: (/var/tmp/sma-1) set queue enable: 0 to qp idx: 0
00:17:17.952  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ENABLE
00:17:17.952  VHOST_CONFIG: (/var/tmp/sma-1) set queue enable: 0 to qp idx: 1
00:17:17.952  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_VRING_BASE
00:17:17.952  VHOST_CONFIG: (/var/tmp/sma-1) vring base idx:0 file:49
00:17:17.952  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_VRING_BASE
00:17:17.952  VHOST_CONFIG: (/var/tmp/sma-1) vring base idx:1 file:1
00:17:18.211  VHOST_CONFIG: (/var/tmp/sma-1) vhost peer closed
00:17:18.211  {}
00:17:18.211   18:38:04 sma.sma_vhost -- sma/vhost_blk.sh@140 -- # NOT rpc_cmd vhost_get_controllers -n sma-1
00:17:18.211   18:38:04 sma.sma_vhost -- common/autotest_common.sh@652 -- # local es=0
00:17:18.211   18:38:04 sma.sma_vhost -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd vhost_get_controllers -n sma-1
00:17:18.211   18:38:04 sma.sma_vhost -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:17:18.211   18:38:04 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:18.211    18:38:04 sma.sma_vhost -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:17:18.211   18:38:04 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:18.211   18:38:04 sma.sma_vhost -- common/autotest_common.sh@655 -- # rpc_cmd vhost_get_controllers -n sma-1
00:17:18.211   18:38:04 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:18.211   18:38:04 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:18.211  request:
00:17:18.211  {
00:17:18.211  "name": "sma-1",
00:17:18.211  "method": "vhost_get_controllers",
00:17:18.211  "req_id": 1
00:17:18.211  }
00:17:18.211  Got JSON-RPC error response
00:17:18.211  response:
00:17:18.211  {
00:17:18.211  "code": -32603,
00:17:18.211  "message": "No such device"
00:17:18.211  }
00:17:18.211   18:38:04 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:17:18.211   18:38:04 sma.sma_vhost -- common/autotest_common.sh@655 -- # es=1
00:17:18.211   18:38:04 sma.sma_vhost -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:18.211   18:38:04 sma.sma_vhost -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:18.211   18:38:04 sma.sma_vhost -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:18.211    18:38:04 sma.sma_vhost -- sma/vhost_blk.sh@141 -- # rpc_cmd vhost_get_controllers
00:17:18.211    18:38:04 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:18.211    18:38:04 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:18.211    18:38:04 sma.sma_vhost -- sma/vhost_blk.sh@141 -- # jq -r '. | length'
00:17:18.211    18:38:04 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:18.211   18:38:04 sma.sma_vhost -- sma/vhost_blk.sh@141 -- # [[ 0 -eq 0 ]]
00:17:18.211   18:38:04 sma.sma_vhost -- sma/vhost_blk.sh@144 -- # delete_device virtio_blk:sma-0
00:17:18.211   18:38:04 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:18.470  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:18.470  I0000 00:00:1731865084.893816  503616 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:18.470  I0000 00:00:1731865084.895418  503616 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:18.470  I0000 00:00:1731865084.896858  503617 subchannel.cc:806] subchannel 0x55ecd8aa0280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55ecd8a22880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55ecd8be1cf0, grpc.internal.client_channel_call_destination=0x7f6d64233390, grpc.internal.event_engine=0x55ecd869c7d0, grpc.internal.security_connector=0x55ecd89fea50, grpc.internal.subchannel_pool=0x55ecd8c134f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55ecd8c16890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:04.896310486+01:00"}), backing off for 1000 ms
00:17:18.470  {}
00:17:18.470   18:38:04 sma.sma_vhost -- sma/vhost_blk.sh@145 -- # delete_device virtio_blk:sma-1
00:17:18.470   18:38:04 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:18.729  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:18.729  I0000 00:00:1731865085.125930  503637 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:18.729  I0000 00:00:1731865085.127539  503637 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:18.730  I0000 00:00:1731865085.128815  503642 subchannel.cc:806] subchannel 0x561137b44280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x561137ac6880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x561137c85cf0, grpc.internal.client_channel_call_destination=0x7f8c366cf390, grpc.internal.event_engine=0x5611377407d0, grpc.internal.security_connector=0x561137aa2a50, grpc.internal.subchannel_pool=0x561137cb74f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x561137cba890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:05.128370781+01:00"}), backing off for 1000 ms
00:17:18.730  {}
00:17:18.730    18:38:05 sma.sma_vhost -- sma/vhost_blk.sh@148 -- # vm_exec 0 'lsblk | grep -E "^vd." | wc -l'
00:17:18.730    18:38:05 sma.sma_vhost -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:17:18.730    18:38:05 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:17:18.730    18:38:05 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:17:18.730    18:38:05 sma.sma_vhost -- vhost/common.sh@338 -- # local vm_num=0
00:17:18.730    18:38:05 sma.sma_vhost -- vhost/common.sh@339 -- # shift
00:17:18.730     18:38:05 sma.sma_vhost -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:17:18.730     18:38:05 sma.sma_vhost -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:17:18.730     18:38:05 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:17:18.730     18:38:05 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:17:18.730     18:38:05 sma.sma_vhost -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:17:18.730     18:38:05 sma.sma_vhost -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:17:18.730    18:38:05 sma.sma_vhost -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'lsblk | grep -E "^vd." | wc -l'
00:17:18.730  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:17:18.989   18:38:05 sma.sma_vhost -- sma/vhost_blk.sh@148 -- # [[ 0 -eq 0 ]]
00:17:18.989   18:38:05 sma.sma_vhost -- sma/vhost_blk.sh@150 -- # devids=()
00:17:18.989    18:38:05 sma.sma_vhost -- sma/vhost_blk.sh@153 -- # jq -r '.[].uuid'
00:17:18.989    18:38:05 sma.sma_vhost -- sma/vhost_blk.sh@153 -- # rpc_cmd bdev_get_bdevs -b null0
00:17:18.989    18:38:05 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:18.989    18:38:05 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:18.989    18:38:05 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:18.989   18:38:05 sma.sma_vhost -- sma/vhost_blk.sh@153 -- # uuid=85c7b3f7-7bb7-4672-9a0d-95e89ea87120
00:17:18.989    18:38:05 sma.sma_vhost -- sma/vhost_blk.sh@154 -- # jq -r .handle
00:17:18.989    18:38:05 sma.sma_vhost -- sma/vhost_blk.sh@154 -- # create_device 0 85c7b3f7-7bb7-4672-9a0d-95e89ea87120
00:17:18.989    18:38:05 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:18.989     18:38:05 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 85c7b3f7-7bb7-4672-9a0d-95e89ea87120
00:17:18.989     18:38:05 sma.sma_vhost -- sma/common.sh@20 -- # python
00:17:19.248  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:19.248  I0000 00:00:1731865085.714284  503678 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:19.248  I0000 00:00:1731865085.715993  503678 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:19.248  I0000 00:00:1731865085.717385  503727 subchannel.cc:806] subchannel 0x556e2ce89280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x556e2ce0b880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x556e2cfcacf0, grpc.internal.client_channel_call_destination=0x7f2c6a386390, grpc.internal.event_engine=0x556e2ca857d0, grpc.internal.security_connector=0x556e2cde7a50, grpc.internal.subchannel_pool=0x556e2cffc4f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x556e2cfff890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:05.716946161+01:00"}), backing off for 999 ms
00:17:19.248  VHOST_CONFIG: (/var/tmp/sma-0) vhost-user server: socket created, fd: 221
00:17:19.248  VHOST_CONFIG: (/var/tmp/sma-0) binding succeeded
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) new vhost user connection is 59
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) new device, handle is 0
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_PROTOCOL_FEATURES
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_PROTOCOL_FEATURES
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Vhost-user protocol features: 0x11ebf
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_QUEUE_NUM
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_BACKEND_REQ_FD
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_OWNER
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:225
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:226
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_CONFIG
00:17:20.185   18:38:06 sma.sma_vhost -- sma/vhost_blk.sh@154 -- # devids[0]=virtio_blk:sma-0
00:17:20.185    18:38:06 sma.sma_vhost -- sma/vhost_blk.sh@155 -- # rpc_cmd bdev_get_bdevs -b null1
00:17:20.185    18:38:06 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:20.185    18:38:06 sma.sma_vhost -- sma/vhost_blk.sh@155 -- # jq -r '.[].uuid'
00:17:20.185    18:38:06 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:20.185    18:38:06 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:20.185   18:38:06 sma.sma_vhost -- sma/vhost_blk.sh@155 -- # uuid=252dba04-ee81-4239-8ab6-00a3dc7be0ad
00:17:20.185    18:38:06 sma.sma_vhost -- sma/vhost_blk.sh@156 -- # create_device 32 252dba04-ee81-4239-8ab6-00a3dc7be0ad
00:17:20.185    18:38:06 sma.sma_vhost -- sma/vhost_blk.sh@156 -- # jq -r .handle
00:17:20.185    18:38:06 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:20.185     18:38:06 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 252dba04-ee81-4239-8ab6-00a3dc7be0ad
00:17:20.185     18:38:06 sma.sma_vhost -- sma/common.sh@20 -- # python
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150005446
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000008):
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 0
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 0
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 0
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 1
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 0
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_INFLIGHT_FD
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd num_queues: 2
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd queue_size: 128
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_size: 4224
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_offset: 0
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) send inflight fd: 58
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_INFLIGHT_FD
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_size: 4224
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_offset: 0
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd num_queues: 2
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd queue_size: 128
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd fd: 227
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd pervq_inflight_size: 2112
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:58
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:225
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150005446
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_MEM_TABLE
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) guest memory region size: 0x40000000
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) 	 guest physical addr: 0x0
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) 	 guest virtual  addr: 0x7f9343e00000
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) 	 host  virtual  addr: 0x7f515a800000
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap addr : 0x7f515a800000
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap size : 0x40000000
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap align: 0x200000
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap off  : 0x0
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 last_used_idx:0 last_avail_idx:0.
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:0 file:228
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE
00:17:20.185  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 last_used_idx:0 last_avail_idx:0.
00:17:20.186  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR
00:17:20.186  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK
00:17:20.186  VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:1 file:229
00:17:20.186  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:17:20.186  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 0
00:17:20.186  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:17:20.186  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 1
00:17:20.186  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:17:20.186  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:17:20.186  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x0000000f):
00:17:20.186  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 0
00:17:20.186  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 1
00:17:20.186  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 1
00:17:20.186  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 1
00:17:20.186  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 1
00:17:20.186  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:17:20.186  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:17:20.186  VHOST_CONFIG: (/var/tmp/sma-0) virtio is now ready for processing.
00:17:20.444  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:20.444  I0000 00:00:1731865086.959324  503915 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:20.444  I0000 00:00:1731865086.961040  503915 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:20.444  I0000 00:00:1731865086.962463  504020 subchannel.cc:806] subchannel 0x55ba8d0f8280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55ba8d07a880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55ba8d239cf0, grpc.internal.client_channel_call_destination=0x7f0423323390, grpc.internal.event_engine=0x55ba8ccf47d0, grpc.internal.security_connector=0x55ba8d060aa0, grpc.internal.subchannel_pool=0x55ba8d26b4f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55ba8d26e890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:06.961973592+01:00"}), backing off for 999 ms
00:17:20.444  VHOST_CONFIG: (/var/tmp/sma-32) vhost-user server: socket created, fd: 232
00:17:20.444  VHOST_CONFIG: (/var/tmp/sma-32) binding succeeded
00:17:21.381  VHOST_CONFIG: (/var/tmp/sma-32) new vhost user connection is 230
00:17:21.381  VHOST_CONFIG: (/var/tmp/sma-32) new device, handle is 1
00:17:21.381  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_FEATURES
00:17:21.381  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_PROTOCOL_FEATURES
00:17:21.381  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_PROTOCOL_FEATURES
00:17:21.381  VHOST_CONFIG: (/var/tmp/sma-32) negotiated Vhost-user protocol features: 0x11ebf
00:17:21.381  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_QUEUE_NUM
00:17:21.381  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_BACKEND_REQ_FD
00:17:21.381  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_OWNER
00:17:21.381  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_FEATURES
00:17:21.381  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_CALL
00:17:21.381  VHOST_CONFIG: (/var/tmp/sma-32) vring call idx:0 file:234
00:17:21.381  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ERR
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_CALL
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) vring call idx:1 file:235
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ERR
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_CONFIG
00:17:21.382   18:38:07 sma.sma_vhost -- sma/vhost_blk.sh@156 -- # devids[1]=virtio_blk:sma-32
00:17:21.382    18:38:07 sma.sma_vhost -- sma/vhost_blk.sh@158 -- # vm_exec 0 'lsblk | grep -E "^vd." | wc -l'
00:17:21.382    18:38:07 sma.sma_vhost -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:17:21.382    18:38:07 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:17:21.382    18:38:07 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:17:21.382    18:38:07 sma.sma_vhost -- vhost/common.sh@338 -- # local vm_num=0
00:17:21.382    18:38:07 sma.sma_vhost -- vhost/common.sh@339 -- # shift
00:17:21.382     18:38:07 sma.sma_vhost -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:17:21.382     18:38:07 sma.sma_vhost -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:17:21.382     18:38:07 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:17:21.382     18:38:07 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:17:21.382     18:38:07 sma.sma_vhost -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:17:21.382     18:38:07 sma.sma_vhost -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:17:21.382    18:38:07 sma.sma_vhost -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'lsblk | grep -E "^vd." | wc -l'
00:17:21.382  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_FEATURES
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) negotiated Virtio features: 0x150005446
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_STATUS
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_STATUS
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) new device status(0x00000008):
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) 	-RESET: 0
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) 	-ACKNOWLEDGE: 0
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) 	-DRIVER: 0
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) 	-FEATURES_OK: 1
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) 	-DRIVER_OK: 0
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) 	-DEVICE_NEED_RESET: 0
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) 	-FAILED: 0
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_INFLIGHT_FD
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) get_inflight_fd num_queues: 2
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) get_inflight_fd queue_size: 128
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) send inflight mmap_size: 4224
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) send inflight mmap_offset: 0
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) send inflight fd: 231
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_INFLIGHT_FD
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) set_inflight_fd mmap_size: 4224
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) set_inflight_fd mmap_offset: 0
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) set_inflight_fd num_queues: 2
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) set_inflight_fd queue_size: 128
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) set_inflight_fd fd: 236
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) set_inflight_fd pervq_inflight_size: 2112
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_CALL
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) vring call idx:0 file:231
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_CALL
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) vring call idx:1 file:234
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_FEATURES
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) negotiated Virtio features: 0x150005446
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_STATUS
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_MEM_TABLE
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) guest memory region size: 0x40000000
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) 	 guest physical addr: 0x0
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) 	 guest virtual  addr: 0x7f9343e00000
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) 	 host  virtual  addr: 0x7f511a800000
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) 	 mmap addr : 0x7f511a800000
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) 	 mmap size : 0x40000000
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) 	 mmap align: 0x200000
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) 	 mmap off  : 0x0
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_NUM
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_BASE
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) vring base idx:0 last_used_idx:0 last_avail_idx:0.
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ADDR
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_KICK
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) vring kick idx:0 file:237
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_NUM
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_BASE
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) vring base idx:1 last_used_idx:0 last_avail_idx:0.
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ADDR
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_KICK
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) vring kick idx:1 file:238
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ENABLE
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) set queue enable: 1 to qp idx: 0
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ENABLE
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) set queue enable: 1 to qp idx: 1
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_STATUS
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_STATUS
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) new device status(0x0000000f):
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) 	-RESET: 0
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) 	-ACKNOWLEDGE: 1
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) 	-DRIVER: 1
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) 	-FEATURES_OK: 1
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) 	-DRIVER_OK: 1
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) 	-DEVICE_NEED_RESET: 0
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) 	-FAILED: 0
00:17:21.382  VHOST_CONFIG: (/var/tmp/sma-32) virtio is now ready for processing.
00:17:21.952   18:38:08 sma.sma_vhost -- sma/vhost_blk.sh@158 -- # [[ 2 -eq 2 ]]
00:17:21.952   18:38:08 sma.sma_vhost -- sma/vhost_blk.sh@161 -- # for id in "${devids[@]}"
00:17:21.952   18:38:08 sma.sma_vhost -- sma/vhost_blk.sh@162 -- # delete_device virtio_blk:sma-0
00:17:21.952   18:38:08 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:22.211  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:22.211  I0000 00:00:1731865088.732553  504350 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:22.211  I0000 00:00:1731865088.734308  504350 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:22.211  I0000 00:00:1731865088.735699  504357 subchannel.cc:806] subchannel 0x55642784b280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5564277cd880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55642798ccf0, grpc.internal.client_channel_call_destination=0x7fb449fe6390, grpc.internal.event_engine=0x5564274477d0, grpc.internal.security_connector=0x5564277a9a50, grpc.internal.subchannel_pool=0x5564279be4f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5564279c1890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:08.735180008+01:00"}), backing off for 1000 ms
00:17:22.777  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:17:22.777  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000000):
00:17:22.777  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 1
00:17:22.777  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 0
00:17:22.777  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 0
00:17:22.777  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 0
00:17:22.777  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 0
00:17:22.777  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:17:22.777  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:17:22.777  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:17:22.777  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 0
00:17:22.778  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:17:22.778  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 1
00:17:22.778  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE
00:17:22.778  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 file:49
00:17:22.778  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE
00:17:22.778  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 file:1
00:17:22.778  VHOST_CONFIG: (/var/tmp/sma-0) vhost peer closed
00:17:22.778  {}
00:17:23.036   18:38:09 sma.sma_vhost -- sma/vhost_blk.sh@161 -- # for id in "${devids[@]}"
00:17:23.036   18:38:09 sma.sma_vhost -- sma/vhost_blk.sh@162 -- # delete_device virtio_blk:sma-32
00:17:23.036   18:38:09 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:23.036  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:23.036  I0000 00:00:1731865089.581627  504492 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:23.036  I0000 00:00:1731865089.583183  504492 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:23.036  I0000 00:00:1731865089.584388  504574 subchannel.cc:806] subchannel 0x558578cf2280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x558578c74880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x558578e33cf0, grpc.internal.client_channel_call_destination=0x7f0048f3b390, grpc.internal.event_engine=0x5585788ee7d0, grpc.internal.security_connector=0x558578c50a50, grpc.internal.subchannel_pool=0x558578e654f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x558578e68890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:09.583990727+01:00"}), backing off for 999 ms
00:17:23.294  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_STATUS
00:17:23.294  VHOST_CONFIG: (/var/tmp/sma-32) new device status(0x00000000):
00:17:23.294  VHOST_CONFIG: (/var/tmp/sma-32) 	-RESET: 1
00:17:23.294  VHOST_CONFIG: (/var/tmp/sma-32) 	-ACKNOWLEDGE: 0
00:17:23.294  VHOST_CONFIG: (/var/tmp/sma-32) 	-DRIVER: 0
00:17:23.294  VHOST_CONFIG: (/var/tmp/sma-32) 	-FEATURES_OK: 0
00:17:23.294  VHOST_CONFIG: (/var/tmp/sma-32) 	-DRIVER_OK: 0
00:17:23.294  VHOST_CONFIG: (/var/tmp/sma-32) 	-DEVICE_NEED_RESET: 0
00:17:23.294  VHOST_CONFIG: (/var/tmp/sma-32) 	-FAILED: 0
00:17:23.294  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ENABLE
00:17:23.294  VHOST_CONFIG: (/var/tmp/sma-32) set queue enable: 0 to qp idx: 0
00:17:23.294  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ENABLE
00:17:23.294  VHOST_CONFIG: (/var/tmp/sma-32) set queue enable: 0 to qp idx: 1
00:17:23.294  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_VRING_BASE
00:17:23.294  VHOST_CONFIG: (/var/tmp/sma-32) vring base idx:0 file:44
00:17:23.294  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_VRING_BASE
00:17:23.294  VHOST_CONFIG: (/var/tmp/sma-32) vring base idx:1 file:6
00:17:23.294  VHOST_CONFIG: (/var/tmp/sma-32) vhost peer closed
00:17:23.294  {}
00:17:23.294    18:38:09 sma.sma_vhost -- sma/vhost_blk.sh@166 -- # vm_exec 0 'lsblk | grep -E "^vd." | wc -l'
00:17:23.294    18:38:09 sma.sma_vhost -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:17:23.294    18:38:09 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:17:23.294    18:38:09 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:17:23.294    18:38:09 sma.sma_vhost -- vhost/common.sh@338 -- # local vm_num=0
00:17:23.294    18:38:09 sma.sma_vhost -- vhost/common.sh@339 -- # shift
00:17:23.294     18:38:09 sma.sma_vhost -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:17:23.294     18:38:09 sma.sma_vhost -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:17:23.294     18:38:09 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:17:23.294     18:38:09 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:17:23.294     18:38:09 sma.sma_vhost -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:17:23.294     18:38:09 sma.sma_vhost -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:17:23.295    18:38:09 sma.sma_vhost -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'lsblk | grep -E "^vd." | wc -l'
00:17:23.295  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:17:23.553   18:38:09 sma.sma_vhost -- sma/vhost_blk.sh@166 -- # [[ 0 -eq 0 ]]
00:17:23.553   18:38:09 sma.sma_vhost -- sma/vhost_blk.sh@168 -- # key0=1234567890abcdef1234567890abcdef
00:17:23.553   18:38:09 sma.sma_vhost -- sma/vhost_blk.sh@169 -- # rpc_cmd bdev_malloc_create -b malloc0 32 4096
00:17:23.553   18:38:09 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:23.553   18:38:09 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:23.553  malloc0
00:17:23.553   18:38:09 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:23.553    18:38:09 sma.sma_vhost -- sma/vhost_blk.sh@170 -- # rpc_cmd bdev_get_bdevs -b malloc0
00:17:23.553    18:38:09 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:23.553    18:38:09 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:23.553    18:38:09 sma.sma_vhost -- sma/vhost_blk.sh@170 -- # jq -r '.[].uuid'
00:17:23.553    18:38:09 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:23.553   18:38:10 sma.sma_vhost -- sma/vhost_blk.sh@170 -- # uuid=0fe0ccb7-5375-4f60-991c-4371c3eed72e
00:17:23.553    18:38:10 sma.sma_vhost -- sma/vhost_blk.sh@192 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:23.554    18:38:10 sma.sma_vhost -- sma/vhost_blk.sh@210 -- # jq -r .handle
00:17:23.554     18:38:10 sma.sma_vhost -- sma/vhost_blk.sh@192 -- # uuid2base64 0fe0ccb7-5375-4f60-991c-4371c3eed72e
00:17:23.554     18:38:10 sma.sma_vhost -- sma/common.sh@20 -- # python
00:17:23.554     18:38:10 sma.sma_vhost -- sma/vhost_blk.sh@192 -- # get_cipher AES_CBC
00:17:23.554     18:38:10 sma.sma_vhost -- sma/common.sh@27 -- # case "$1" in
00:17:23.554     18:38:10 sma.sma_vhost -- sma/common.sh@28 -- # echo 0
00:17:23.554     18:38:10 sma.sma_vhost -- sma/vhost_blk.sh@192 -- # format_key 1234567890abcdef1234567890abcdef
00:17:23.554     18:38:10 sma.sma_vhost -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/63
00:17:23.554      18:38:10 sma.sma_vhost -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:17:23.813  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:23.813  I0000 00:00:1731865090.351808  504609 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:23.813  I0000 00:00:1731865090.354249  504609 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:23.813  I0000 00:00:1731865090.356259  504621 subchannel.cc:806] subchannel 0x560ba2f69280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x560ba2eeb880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x560ba30aacf0, grpc.internal.client_channel_call_destination=0x7fad4c337390, grpc.internal.event_engine=0x560ba2b657d0, grpc.internal.security_connector=0x560ba2ec7a50, grpc.internal.subchannel_pool=0x560ba30dc4f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x560ba30df890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:10.355658007+01:00"}), backing off for 1000 ms
00:17:24.071  VHOST_CONFIG: (/var/tmp/sma-0) vhost-user server: socket created, fd: 240
00:17:24.071  VHOST_CONFIG: (/var/tmp/sma-0) binding succeeded
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) new vhost user connection is 60
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) new device, handle is 0
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_PROTOCOL_FEATURES
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_PROTOCOL_FEATURES
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Vhost-user protocol features: 0x11ebf
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_QUEUE_NUM
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_BACKEND_REQ_FD
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_OWNER
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:242
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:243
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_CONFIG
00:17:24.330   18:38:10 sma.sma_vhost -- sma/vhost_blk.sh@192 -- # devid0=virtio_blk:sma-0
00:17:24.330    18:38:10 sma.sma_vhost -- sma/vhost_blk.sh@194 -- # jq -r '. | length'
00:17:24.330    18:38:10 sma.sma_vhost -- sma/vhost_blk.sh@194 -- # rpc_cmd vhost_get_controllers
00:17:24.330    18:38:10 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:24.330    18:38:10 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150007646
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000008):
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 0
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 0
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 0
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 1
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 0
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_INFLIGHT_FD
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd num_queues: 2
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd queue_size: 128
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_size: 4224
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_offset: 0
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) send inflight fd: 244
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_INFLIGHT_FD
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_size: 4224
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_offset: 0
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd num_queues: 2
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd queue_size: 128
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd fd: 245
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd pervq_inflight_size: 2112
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:244
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:242
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150007646
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_MEM_TABLE
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) guest memory region size: 0x40000000
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) 	 guest physical addr: 0x0
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) 	 guest virtual  addr: 0x7f9343e00000
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) 	 host  virtual  addr: 0x7f515a800000
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap addr : 0x7f515a800000
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap size : 0x40000000
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap align: 0x200000
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap off  : 0x0
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 last_used_idx:0 last_avail_idx:0.
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:0 file:246
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 last_used_idx:0 last_avail_idx:0.
00:17:24.330  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR
00:17:24.331  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK
00:17:24.331  VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:1 file:59
00:17:24.331  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:17:24.331  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 0
00:17:24.331  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:17:24.331  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 1
00:17:24.331  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:17:24.331  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:17:24.331  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x0000000f):
00:17:24.331  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 0
00:17:24.331  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 1
00:17:24.331  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 1
00:17:24.331  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 1
00:17:24.331  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 1
00:17:24.331  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:17:24.331  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:17:24.331  VHOST_CONFIG: (/var/tmp/sma-0) virtio is now ready for processing.
00:17:24.331    18:38:10 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:24.331   18:38:10 sma.sma_vhost -- sma/vhost_blk.sh@194 -- # [[ 1 -eq 1 ]]
00:17:24.331    18:38:10 sma.sma_vhost -- sma/vhost_blk.sh@195 -- # rpc_cmd vhost_get_controllers
00:17:24.331    18:38:10 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:24.331    18:38:10 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:24.331    18:38:10 sma.sma_vhost -- sma/vhost_blk.sh@195 -- # jq -r '.[].backend_specific.block.bdev'
00:17:24.331    18:38:10 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:24.331   18:38:10 sma.sma_vhost -- sma/vhost_blk.sh@195 -- # bdev=2298a5f8-b3ed-44b5-abc0-9bba01f8d7a6
00:17:24.331    18:38:10 sma.sma_vhost -- sma/vhost_blk.sh@197 -- # rpc_cmd bdev_get_bdevs
00:17:24.331    18:38:10 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:24.331    18:38:10 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:24.331    18:38:10 sma.sma_vhost -- sma/vhost_blk.sh@197 -- # jq -r '.[] | select(.product_name == "crypto")'
00:17:24.331    18:38:10 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:24.589   18:38:10 sma.sma_vhost -- sma/vhost_blk.sh@197 -- # crypto_bdev='{
00:17:24.589    "name": "2298a5f8-b3ed-44b5-abc0-9bba01f8d7a6",
00:17:24.589    "aliases": [
00:17:24.589      "df9369fe-2e62-5078-b9e6-541f43aa97b1"
00:17:24.589    ],
00:17:24.590    "product_name": "crypto",
00:17:24.590    "block_size": 4096,
00:17:24.590    "num_blocks": 8192,
00:17:24.590    "uuid": "df9369fe-2e62-5078-b9e6-541f43aa97b1",
00:17:24.590    "assigned_rate_limits": {
00:17:24.590      "rw_ios_per_sec": 0,
00:17:24.590      "rw_mbytes_per_sec": 0,
00:17:24.590      "r_mbytes_per_sec": 0,
00:17:24.590      "w_mbytes_per_sec": 0
00:17:24.590    },
00:17:24.590    "claimed": false,
00:17:24.590    "zoned": false,
00:17:24.590    "supported_io_types": {
00:17:24.590      "read": true,
00:17:24.590      "write": true,
00:17:24.590      "unmap": true,
00:17:24.590      "flush": true,
00:17:24.590      "reset": true,
00:17:24.590      "nvme_admin": false,
00:17:24.590      "nvme_io": false,
00:17:24.590      "nvme_io_md": false,
00:17:24.590      "write_zeroes": true,
00:17:24.590      "zcopy": false,
00:17:24.590      "get_zone_info": false,
00:17:24.590      "zone_management": false,
00:17:24.590      "zone_append": false,
00:17:24.590      "compare": false,
00:17:24.590      "compare_and_write": false,
00:17:24.590      "abort": false,
00:17:24.590      "seek_hole": false,
00:17:24.590      "seek_data": false,
00:17:24.590      "copy": false,
00:17:24.590      "nvme_iov_md": false
00:17:24.590    },
00:17:24.590    "memory_domains": [
00:17:24.590      {
00:17:24.590        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:24.590        "dma_device_type": 2
00:17:24.590      }
00:17:24.590    ],
00:17:24.590    "driver_specific": {
00:17:24.590      "crypto": {
00:17:24.590        "base_bdev_name": "malloc0",
00:17:24.590        "name": "2298a5f8-b3ed-44b5-abc0-9bba01f8d7a6",
00:17:24.590        "key_name": "2298a5f8-b3ed-44b5-abc0-9bba01f8d7a6_AES_CBC"
00:17:24.590      }
00:17:24.590    }
00:17:24.590  }'
00:17:24.590    18:38:10 sma.sma_vhost -- sma/vhost_blk.sh@198 -- # jq -r .driver_specific.crypto.name
00:17:24.590   18:38:10 sma.sma_vhost -- sma/vhost_blk.sh@198 -- # [[ 2298a5f8-b3ed-44b5-abc0-9bba01f8d7a6 == \2\2\9\8\a\5\f\8\-\b\3\e\d\-\4\4\b\5\-\a\b\c\0\-\9\b\b\a\0\1\f\8\d\7\a\6 ]]
00:17:24.590    18:38:10 sma.sma_vhost -- sma/vhost_blk.sh@199 -- # jq -r .driver_specific.crypto.key_name
00:17:24.590   18:38:10 sma.sma_vhost -- sma/vhost_blk.sh@199 -- # key_name=2298a5f8-b3ed-44b5-abc0-9bba01f8d7a6_AES_CBC
00:17:24.590    18:38:10 sma.sma_vhost -- sma/vhost_blk.sh@200 -- # rpc_cmd accel_crypto_keys_get -k 2298a5f8-b3ed-44b5-abc0-9bba01f8d7a6_AES_CBC
00:17:24.590    18:38:10 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:24.590    18:38:10 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:24.590    18:38:11 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:24.590   18:38:11 sma.sma_vhost -- sma/vhost_blk.sh@200 -- # key_obj='[
00:17:24.590  {
00:17:24.590  "name": "2298a5f8-b3ed-44b5-abc0-9bba01f8d7a6_AES_CBC",
00:17:24.590  "cipher": "AES_CBC",
00:17:24.590  "key": "1234567890abcdef1234567890abcdef"
00:17:24.590  }
00:17:24.590  ]'
00:17:24.590    18:38:11 sma.sma_vhost -- sma/vhost_blk.sh@201 -- # jq -r '.[0].key'
00:17:24.590   18:38:11 sma.sma_vhost -- sma/vhost_blk.sh@201 -- # [[ 1234567890abcdef1234567890abcdef == \1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f\1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f ]]
00:17:24.590    18:38:11 sma.sma_vhost -- sma/vhost_blk.sh@202 -- # jq -r '.[0].cipher'
00:17:24.590   18:38:11 sma.sma_vhost -- sma/vhost_blk.sh@202 -- # [[ AES_CBC == \A\E\S\_\C\B\C ]]
00:17:24.590   18:38:11 sma.sma_vhost -- sma/vhost_blk.sh@205 -- # delete_device virtio_blk:sma-0
00:17:24.590   18:38:11 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:24.849  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:24.849  I0000 00:00:1731865091.269284  504861 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:24.849  I0000 00:00:1731865091.271081  504861 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:24.849  I0000 00:00:1731865091.272450  504862 subchannel.cc:806] subchannel 0x560329e09280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x560329d8b880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x560329f4acf0, grpc.internal.client_channel_call_destination=0x7f6600745390, grpc.internal.event_engine=0x560329a057d0, grpc.internal.security_connector=0x560329d67a50, grpc.internal.subchannel_pool=0x560329f7c4f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x560329f7f890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:11.271971002+01:00"}), backing off for 999 ms
00:17:24.849  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:17:24.849  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000000):
00:17:24.849  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 1
00:17:24.849  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 0
00:17:24.849  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 0
00:17:24.849  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 0
00:17:24.849  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 0
00:17:24.849  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:17:24.849  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:17:24.849  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:17:24.849  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 0
00:17:24.849  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:17:24.849  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 1
00:17:24.849  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE
00:17:24.849  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 file:1
00:17:24.849  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE
00:17:24.849  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 file:35
00:17:24.849  VHOST_CONFIG: (/var/tmp/sma-0) vhost peer closed
00:17:24.849  {}
00:17:25.107    18:38:11 sma.sma_vhost -- sma/vhost_blk.sh@206 -- # rpc_cmd bdev_get_bdevs
00:17:25.107    18:38:11 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:25.107    18:38:11 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:25.107    18:38:11 sma.sma_vhost -- sma/vhost_blk.sh@206 -- # jq -r '.[] | select(.product_name == "crypto")'
00:17:25.107    18:38:11 sma.sma_vhost -- sma/vhost_blk.sh@206 -- # jq -r length
00:17:25.107    18:38:11 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:25.107   18:38:11 sma.sma_vhost -- sma/vhost_blk.sh@206 -- # [[ '' -eq 0 ]]
00:17:25.108   18:38:11 sma.sma_vhost -- sma/vhost_blk.sh@209 -- # device_vhost=2
00:17:25.108    18:38:11 sma.sma_vhost -- sma/vhost_blk.sh@210 -- # jq -r '.[].uuid'
00:17:25.108    18:38:11 sma.sma_vhost -- sma/vhost_blk.sh@210 -- # rpc_cmd bdev_get_bdevs -b null0
00:17:25.108    18:38:11 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:25.108    18:38:11 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:25.108    18:38:11 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:25.108   18:38:11 sma.sma_vhost -- sma/vhost_blk.sh@210 -- # uuid=85c7b3f7-7bb7-4672-9a0d-95e89ea87120
00:17:25.108    18:38:11 sma.sma_vhost -- sma/vhost_blk.sh@211 -- # create_device 0 85c7b3f7-7bb7-4672-9a0d-95e89ea87120
00:17:25.108    18:38:11 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:25.108    18:38:11 sma.sma_vhost -- sma/vhost_blk.sh@211 -- # jq -r .handle
00:17:25.108     18:38:11 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 85c7b3f7-7bb7-4672-9a0d-95e89ea87120
00:17:25.108     18:38:11 sma.sma_vhost -- sma/common.sh@20 -- # python
00:17:25.366  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:25.366  I0000 00:00:1731865091.776949  504894 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:25.366  I0000 00:00:1731865091.778702  504894 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:25.366  I0000 00:00:1731865091.780214  505014 subchannel.cc:806] subchannel 0x55f12070b280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55f12068d880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55f12084ccf0, grpc.internal.client_channel_call_destination=0x7f4b7a3c8390, grpc.internal.event_engine=0x55f1203077d0, grpc.internal.security_connector=0x55f120669a50, grpc.internal.subchannel_pool=0x55f12087e4f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55f120881890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:11.779702172+01:00"}), backing off for 1000 ms
00:17:25.366  VHOST_CONFIG: (/var/tmp/sma-0) vhost-user server: socket created, fd: 240
00:17:25.366  VHOST_CONFIG: (/var/tmp/sma-0) binding succeeded
00:17:25.931  VHOST_CONFIG: (/var/tmp/sma-0) new vhost user connection is 58
00:17:25.931  VHOST_CONFIG: (/var/tmp/sma-0) new device, handle is 0
00:17:25.931  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES
00:17:25.931  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_PROTOCOL_FEATURES
00:17:25.931  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_PROTOCOL_FEATURES
00:17:25.931  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Vhost-user protocol features: 0x11ebf
00:17:25.931  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_QUEUE_NUM
00:17:25.931  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_BACKEND_REQ_FD
00:17:25.931  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_OWNER
00:17:25.931  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES
00:17:25.931  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:17:25.931  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:242
00:17:25.931  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR
00:17:25.931  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:17:25.931  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:243
00:17:25.931  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR
00:17:25.931  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_CONFIG
00:17:25.931   18:38:12 sma.sma_vhost -- sma/vhost_blk.sh@211 -- # device=virtio_blk:sma-0
00:17:25.931   18:38:12 sma.sma_vhost -- sma/vhost_blk.sh@214 -- # diff /dev/fd/62 /dev/fd/61
00:17:25.931    18:38:12 sma.sma_vhost -- sma/vhost_blk.sh@214 -- # jq --sort-keys
00:17:25.931    18:38:12 sma.sma_vhost -- sma/vhost_blk.sh@214 -- # jq --sort-keys
00:17:25.932    18:38:12 sma.sma_vhost -- sma/vhost_blk.sh@214 -- # get_qos_caps 2
00:17:25.932    18:38:12 sma.sma_vhost -- sma/common.sh@45 -- # local rootdir
00:17:25.932     18:38:12 sma.sma_vhost -- sma/common.sh@47 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:17:25.932    18:38:12 sma.sma_vhost -- sma/common.sh@47 -- # rootdir=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../..
00:17:25.932    18:38:12 sma.sma_vhost -- sma/common.sh@49 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../../scripts/sma-client.py
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150005446
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000008):
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 0
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 0
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 0
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 1
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 0
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_INFLIGHT_FD
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd num_queues: 2
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd queue_size: 128
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_size: 4224
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_offset: 0
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) send inflight fd: 60
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_INFLIGHT_FD
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_size: 4224
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_offset: 0
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd num_queues: 2
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd queue_size: 128
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd fd: 244
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd pervq_inflight_size: 2112
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:60
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:242
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150005446
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_MEM_TABLE
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) guest memory region size: 0x40000000
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) 	 guest physical addr: 0x0
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) 	 guest virtual  addr: 0x7f9343e00000
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) 	 host  virtual  addr: 0x7f511a600000
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap addr : 0x7f511a600000
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap size : 0x40000000
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap align: 0x200000
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap off  : 0x0
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 last_used_idx:0 last_avail_idx:0.
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:0 file:245
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 last_used_idx:0 last_avail_idx:0.
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:1 file:246
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 0
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 1
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x0000000f):
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 0
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 1
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 1
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 1
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 1
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:17:25.932  VHOST_CONFIG: (/var/tmp/sma-0) virtio is now ready for processing.
00:17:26.190  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:26.190  I0000 00:00:1731865092.628528  505130 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:26.190  I0000 00:00:1731865092.630268  505130 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:26.190  I0000 00:00:1731865092.631588  505131 subchannel.cc:806] subchannel 0x556480e86eb0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x556480e91de0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x556480c6e0b0, grpc.internal.client_channel_call_destination=0x7f32a471d390, grpc.internal.event_engine=0x556480e6c7a0, grpc.internal.security_connector=0x556480e680c0, grpc.internal.subchannel_pool=0x556480e67f20, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x556480e61d10, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:12.631134133+01:00"}), backing off for 1000 ms
00:17:26.190   18:38:12 sma.sma_vhost -- sma/vhost_blk.sh@233 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:26.190    18:38:12 sma.sma_vhost -- sma/vhost_blk.sh@233 -- # uuid2base64 85c7b3f7-7bb7-4672-9a0d-95e89ea87120
00:17:26.190    18:38:12 sma.sma_vhost -- sma/common.sh@20 -- # python
00:17:26.449  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:26.449  I0000 00:00:1731865092.956844  505151 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:26.449  I0000 00:00:1731865092.958595  505151 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:26.449  I0000 00:00:1731865092.959947  505271 subchannel.cc:806] subchannel 0x562b3aa9f280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x562b3aa21880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x562b3abe0cf0, grpc.internal.client_channel_call_destination=0x7f8f9dd1c390, grpc.internal.event_engine=0x562b3a8b0e40, grpc.internal.security_connector=0x562b3aa07aa0, grpc.internal.subchannel_pool=0x562b3ac124f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x562b3ac15890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:12.959499568+01:00"}), backing off for 1000 ms
00:17:26.449  {}
00:17:26.707   18:38:13 sma.sma_vhost -- sma/vhost_blk.sh@252 -- # diff /dev/fd/62 /dev/fd/61
00:17:26.707    18:38:13 sma.sma_vhost -- sma/vhost_blk.sh@252 -- # rpc_cmd bdev_get_bdevs -b 85c7b3f7-7bb7-4672-9a0d-95e89ea87120
00:17:26.707    18:38:13 sma.sma_vhost -- sma/vhost_blk.sh@252 -- # jq --sort-keys
00:17:26.707    18:38:13 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.707    18:38:13 sma.sma_vhost -- sma/vhost_blk.sh@252 -- # jq --sort-keys '.[].assigned_rate_limits'
00:17:26.707    18:38:13 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:26.707    18:38:13 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
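The `diff /dev/fd/62 /dev/fd/61` step above compares two views of the `bdev_get_bdevs` output after piping both through `jq --sort-keys`, so the comparison ignores key ordering. The same key-order-insensitive check can be sketched with the Python stdlib:

```python
import json

def normalized(doc):
    # json.dumps(..., sort_keys=True) plays the role of `jq --sort-keys`:
    # documents that differ only in key order serialize identically.
    return json.dumps(json.loads(doc), sort_keys=True, indent=2)

a = '{"name": "bdev0", "assigned_rate_limits": {"rw_ios_per_sec": 0}}'
b = '{"assigned_rate_limits": {"rw_ios_per_sec": 0}, "name": "bdev0"}'
same = normalized(a) == normalized(b)
print(same)  # True
```

An empty `diff` output, like the `[[ 0 == 0 ]]` exit-status check in the trace, means the two normalized documents matched.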
00:17:26.707   18:38:13 sma.sma_vhost -- sma/vhost_blk.sh@264 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:26.707  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:26.707  I0000 00:00:1731865093.267928  505382 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:26.707  I0000 00:00:1731865093.269556  505382 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:26.707  I0000 00:00:1731865093.270980  505383 subchannel.cc:806] subchannel 0x55dc2c9e6280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55dc2c968880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55dc2cb27cf0, grpc.internal.client_channel_call_destination=0x7ffb37cef390, grpc.internal.event_engine=0x55dc2c5e27d0, grpc.internal.security_connector=0x55dc2c94eaa0, grpc.internal.subchannel_pool=0x55dc2cb594f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55dc2cb5c890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:13.270414255+01:00"}), backing off for 1000 ms
00:17:26.966  {}
00:17:26.966   18:38:13 sma.sma_vhost -- sma/vhost_blk.sh@283 -- # diff /dev/fd/62 /dev/fd/61
00:17:26.966    18:38:13 sma.sma_vhost -- sma/vhost_blk.sh@283 -- # jq --sort-keys
00:17:26.966    18:38:13 sma.sma_vhost -- sma/vhost_blk.sh@283 -- # rpc_cmd bdev_get_bdevs -b 85c7b3f7-7bb7-4672-9a0d-95e89ea87120
00:17:26.966    18:38:13 sma.sma_vhost -- sma/vhost_blk.sh@283 -- # jq --sort-keys '.[].assigned_rate_limits'
00:17:26.966    18:38:13 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.966    18:38:13 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:26.966    18:38:13 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.966   18:38:13 sma.sma_vhost -- sma/vhost_blk.sh@295 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:26.966     18:38:13 sma.sma_vhost -- sma/vhost_blk.sh@295 -- # uuidgen
00:17:26.966    18:38:13 sma.sma_vhost -- sma/vhost_blk.sh@295 -- # uuid2base64 5bf4b9b2-fbf4-4559-b172-6106616380cd
00:17:26.966    18:38:13 sma.sma_vhost -- sma/common.sh@20 -- # python
00:17:26.966   18:38:13 sma.sma_vhost -- common/autotest_common.sh@652 -- # local es=0
00:17:26.966   18:38:13 sma.sma_vhost -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:26.966   18:38:13 sma.sma_vhost -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:26.966   18:38:13 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:26.966    18:38:13 sma.sma_vhost -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:26.966   18:38:13 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:26.966    18:38:13 sma.sma_vhost -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:26.966   18:38:13 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:26.966   18:38:13 sma.sma_vhost -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:26.966   18:38:13 sma.sma_vhost -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:17:26.966   18:38:13 sma.sma_vhost -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:27.226  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:27.226  I0000 00:00:1731865093.601694  505415 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:27.226  I0000 00:00:1731865093.603261  505415 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:27.226  I0000 00:00:1731865093.604569  505416 subchannel.cc:806] subchannel 0x564f0add4280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x564f0ad56880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x564f0af15cf0, grpc.internal.client_channel_call_destination=0x7fe37db2f390, grpc.internal.event_engine=0x564f0abe5e40, grpc.internal.security_connector=0x564f0ad3caa0, grpc.internal.subchannel_pool=0x564f0af474f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x564f0af4a890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:13.604110942+01:00"}), backing off for 1000 ms
00:17:27.226  [2024-11-17 18:38:13.642395] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 5bf4b9b2-fbf4-4559-b172-6106616380cd
00:17:27.226  Traceback (most recent call last):
00:17:27.226    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:17:27.226      main(sys.argv[1:])
00:17:27.226    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:17:27.226      result = client.call(request['method'], request.get('params', {}))
00:17:27.226               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:27.226    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:17:27.226      response = func(request=json_format.ParseDict(params, input()))
00:17:27.226                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:27.226    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:17:27.226      return _end_unary_response_blocking(state, call, False, None)
00:17:27.226             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:27.226    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:17:27.226      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:17:27.226      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:27.226  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:17:27.226  	status = StatusCode.INVALID_ARGUMENT
00:17:27.226  	details = "Specified volume is not attached to the device"
00:17:27.226  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-11-17T18:38:13.646966362+01:00", grpc_status:3, grpc_message:"Specified volume is not attached to the device"}"
00:17:27.226  >
00:17:27.226   18:38:13 sma.sma_vhost -- common/autotest_common.sh@655 -- # es=1
00:17:27.226   18:38:13 sma.sma_vhost -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:27.226   18:38:13 sma.sma_vhost -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:27.226   18:38:13 sma.sma_vhost -- common/autotest_common.sh@679 -- # (( !es == 0 ))
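The `NOT` wrapper traced above runs `sma-client.py`, expecting it to fail: the traceback with `StatusCode.INVALID_ARGUMENT` is the desired outcome, and the `es=1` / `(( !es == 0 ))` lines turn that nonzero exit status into a test pass. A hedged stdlib sketch of that expected-failure pattern (the function name is illustrative, not the script's):

```python
import subprocess
import sys

def expect_failure(cmd):
    """Return True when the command exits nonzero, mirroring the NOT helper's logic."""
    result = subprocess.run(cmd, capture_output=True)
    return result.returncode != 0

# Use the current interpreter so the sketch stays self-contained.
ok = expect_failure([sys.executable, "-c", "import sys; sys.exit(1)"])
print(ok)  # True: the child failed, which is exactly what NOT asserts
```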
00:17:27.226   18:38:13 sma.sma_vhost -- sma/vhost_blk.sh@314 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:27.226    18:38:13 sma.sma_vhost -- sma/vhost_blk.sh@314 -- # base64
00:17:27.226   18:38:13 sma.sma_vhost -- common/autotest_common.sh@652 -- # local es=0
00:17:27.226   18:38:13 sma.sma_vhost -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:27.226   18:38:13 sma.sma_vhost -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:27.226   18:38:13 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:27.226    18:38:13 sma.sma_vhost -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:27.226   18:38:13 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:27.226    18:38:13 sma.sma_vhost -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:27.226   18:38:13 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:27.226   18:38:13 sma.sma_vhost -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:27.226   18:38:13 sma.sma_vhost -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:17:27.226   18:38:13 sma.sma_vhost -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:27.485  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:27.485  I0000 00:00:1731865093.878315  505445 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:27.485  I0000 00:00:1731865093.880072  505445 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:27.486  I0000 00:00:1731865093.881468  505454 subchannel.cc:806] subchannel 0x55eaa8e3d280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55eaa8dbf880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55eaa8f7ecf0, grpc.internal.client_channel_call_destination=0x7f3139627390, grpc.internal.event_engine=0x55eaa8a397d0, grpc.internal.security_connector=0x55eaa8da5aa0, grpc.internal.subchannel_pool=0x55eaa8fb04f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55eaa8fb3890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:13.881004101+01:00"}), backing off for 999 ms
00:17:27.486  Traceback (most recent call last):
00:17:27.486    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:17:27.486      main(sys.argv[1:])
00:17:27.486    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:17:27.486      result = client.call(request['method'], request.get('params', {}))
00:17:27.486               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:27.486    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:17:27.486      response = func(request=json_format.ParseDict(params, input()))
00:17:27.486                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:27.486    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:17:27.486      return _end_unary_response_blocking(state, call, False, None)
00:17:27.486             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:27.486    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:17:27.486      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:17:27.486      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:27.486  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:17:27.486  	status = StatusCode.INVALID_ARGUMENT
00:17:27.486  	details = "Invalid volume uuid"
00:17:27.486  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-11-17T18:38:13.887605054+01:00", grpc_status:3, grpc_message:"Invalid volume uuid"}"
00:17:27.486  >
00:17:27.486   18:38:13 sma.sma_vhost -- common/autotest_common.sh@655 -- # es=1
00:17:27.486   18:38:13 sma.sma_vhost -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:27.486   18:38:13 sma.sma_vhost -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:27.486   18:38:13 sma.sma_vhost -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:27.486   18:38:13 sma.sma_vhost -- sma/vhost_blk.sh@333 -- # diff /dev/fd/62 /dev/fd/61
00:17:27.486    18:38:13 sma.sma_vhost -- sma/vhost_blk.sh@333 -- # rpc_cmd bdev_get_bdevs -b 85c7b3f7-7bb7-4672-9a0d-95e89ea87120
00:17:27.486    18:38:13 sma.sma_vhost -- sma/vhost_blk.sh@333 -- # jq --sort-keys
00:17:27.486    18:38:13 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:27.486    18:38:13 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:27.486    18:38:13 sma.sma_vhost -- sma/vhost_blk.sh@333 -- # jq --sort-keys '.[].assigned_rate_limits'
00:17:27.486    18:38:13 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:27.486   18:38:13 sma.sma_vhost -- sma/vhost_blk.sh@344 -- # delete_device virtio_blk:sma-0
00:17:27.486   18:38:13 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:27.745  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:27.745  I0000 00:00:1731865094.170183  505480 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:27.745  I0000 00:00:1731865094.172027  505480 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:27.745  I0000 00:00:1731865094.173454  505614 subchannel.cc:806] subchannel 0x55bd7a881280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55bd7a803880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55bd7a9c2cf0, grpc.internal.client_channel_call_destination=0x7f812f647390, grpc.internal.event_engine=0x55bd7a47d7d0, grpc.internal.security_connector=0x55bd7a7dfa50, grpc.internal.subchannel_pool=0x55bd7a9f44f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55bd7a9f7890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:14.172948589+01:00"}), backing off for 999 ms
00:17:27.745  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:17:27.745  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000000):
00:17:27.745  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 1
00:17:27.745  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 0
00:17:27.745  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 0
00:17:27.745  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 0
00:17:27.745  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 0
00:17:27.745  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:17:27.745  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:17:27.745  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:17:27.745  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 0
00:17:27.745  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:17:27.745  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 1
00:17:27.745  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE
00:17:27.745  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 file:49
00:17:27.745  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE
00:17:27.745  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 file:1
00:17:27.745  VHOST_CONFIG: (/var/tmp/sma-0) vhost peer closed
00:17:28.004  {}
00:17:28.004   18:38:14 sma.sma_vhost -- sma/vhost_blk.sh@346 -- # cleanup
00:17:28.004   18:38:14 sma.sma_vhost -- sma/vhost_blk.sh@14 -- # killprocess 501937
00:17:28.004   18:38:14 sma.sma_vhost -- common/autotest_common.sh@954 -- # '[' -z 501937 ']'
00:17:28.004   18:38:14 sma.sma_vhost -- common/autotest_common.sh@958 -- # kill -0 501937
00:17:28.004    18:38:14 sma.sma_vhost -- common/autotest_common.sh@959 -- # uname
00:17:28.004   18:38:14 sma.sma_vhost -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:28.004    18:38:14 sma.sma_vhost -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 501937
00:17:28.004   18:38:14 sma.sma_vhost -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:28.004   18:38:14 sma.sma_vhost -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:28.004   18:38:14 sma.sma_vhost -- common/autotest_common.sh@972 -- # echo 'killing process with pid 501937'
00:17:28.004  killing process with pid 501937
00:17:28.004   18:38:14 sma.sma_vhost -- common/autotest_common.sh@973 -- # kill 501937
00:17:28.004   18:38:14 sma.sma_vhost -- common/autotest_common.sh@978 -- # wait 501937
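The `killprocess` sequence above first probes liveness with `kill -0`, refuses to signal a process named `sudo`, then sends the signal and `wait`s for the pid to be reaped. A stdlib sketch of that terminate-and-reap step, with an escalation to SIGKILL that the shell helper may or may not perform (the timeout handling here is an assumption):

```python
import signal
import subprocess
import sys

def killprocess(proc, timeout=5.0):
    """Send SIGTERM and wait for the child to exit, like the script's kill + wait pair."""
    proc.send_signal(signal.SIGTERM)
    try:
        return proc.wait(timeout=timeout)
    except subprocess.TimeoutExpired:
        proc.kill()          # escalate to SIGKILL if SIGTERM is ignored
        return proc.wait()

child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
rc = killprocess(child)
print(rc)  # negative value on POSIX: child was terminated by a signal
```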
00:17:28.264   18:38:14 sma.sma_vhost -- sma/vhost_blk.sh@15 -- # killprocess 502146
00:17:28.264   18:38:14 sma.sma_vhost -- common/autotest_common.sh@954 -- # '[' -z 502146 ']'
00:17:28.264   18:38:14 sma.sma_vhost -- common/autotest_common.sh@958 -- # kill -0 502146
00:17:28.264    18:38:14 sma.sma_vhost -- common/autotest_common.sh@959 -- # uname
00:17:28.264   18:38:14 sma.sma_vhost -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:28.264    18:38:14 sma.sma_vhost -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 502146
00:17:28.264   18:38:14 sma.sma_vhost -- common/autotest_common.sh@960 -- # process_name=python3
00:17:28.264   18:38:14 sma.sma_vhost -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:17:28.264   18:38:14 sma.sma_vhost -- common/autotest_common.sh@972 -- # echo 'killing process with pid 502146'
00:17:28.264  killing process with pid 502146
00:17:28.264   18:38:14 sma.sma_vhost -- common/autotest_common.sh@973 -- # kill 502146
00:17:28.264   18:38:14 sma.sma_vhost -- common/autotest_common.sh@978 -- # wait 502146
00:17:28.264   18:38:14 sma.sma_vhost -- sma/vhost_blk.sh@16 -- # vm_kill_all
00:17:28.264   18:38:14 sma.sma_vhost -- vhost/common.sh@476 -- # local vm
00:17:28.264    18:38:14 sma.sma_vhost -- vhost/common.sh@477 -- # vm_list_all
00:17:28.264    18:38:14 sma.sma_vhost -- vhost/common.sh@466 -- # vms=()
00:17:28.264    18:38:14 sma.sma_vhost -- vhost/common.sh@466 -- # local vms
00:17:28.264    18:38:14 sma.sma_vhost -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:17:28.264    18:38:14 sma.sma_vhost -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:17:28.264    18:38:14 sma.sma_vhost -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/0
00:17:28.264   18:38:14 sma.sma_vhost -- vhost/common.sh@477 -- # for vm in $(vm_list_all)
00:17:28.264   18:38:14 sma.sma_vhost -- vhost/common.sh@478 -- # vm_kill 0
00:17:28.264   18:38:14 sma.sma_vhost -- vhost/common.sh@442 -- # vm_num_is_valid 0
00:17:28.264   18:38:14 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:17:28.264   18:38:14 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:17:28.264   18:38:14 sma.sma_vhost -- vhost/common.sh@443 -- # local vm_dir=/root/vhost_test/vms/0
00:17:28.264   18:38:14 sma.sma_vhost -- vhost/common.sh@445 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:17:28.264   18:38:14 sma.sma_vhost -- vhost/common.sh@449 -- # local vm_pid
00:17:28.264    18:38:14 sma.sma_vhost -- vhost/common.sh@450 -- # cat /root/vhost_test/vms/0/qemu.pid
00:17:28.264   18:38:14 sma.sma_vhost -- vhost/common.sh@450 -- # vm_pid=497934
00:17:28.264   18:38:14 sma.sma_vhost -- vhost/common.sh@452 -- # notice 'Killing virtual machine /root/vhost_test/vms/0 (pid=497934)'
00:17:28.264   18:38:14 sma.sma_vhost -- vhost/common.sh@94 -- # message INFO 'Killing virtual machine /root/vhost_test/vms/0 (pid=497934)'
00:17:28.264   18:38:14 sma.sma_vhost -- vhost/common.sh@60 -- # local verbose_out
00:17:28.264   18:38:14 sma.sma_vhost -- vhost/common.sh@61 -- # false
00:17:28.264   18:38:14 sma.sma_vhost -- vhost/common.sh@62 -- # verbose_out=
00:17:28.264   18:38:14 sma.sma_vhost -- vhost/common.sh@69 -- # local msg_type=INFO
00:17:28.264   18:38:14 sma.sma_vhost -- vhost/common.sh@70 -- # shift
00:17:28.264   18:38:14 sma.sma_vhost -- vhost/common.sh@71 -- # echo -e 'INFO: Killing virtual machine /root/vhost_test/vms/0 (pid=497934)'
00:17:28.264  INFO: Killing virtual machine /root/vhost_test/vms/0 (pid=497934)
00:17:28.264   18:38:14 sma.sma_vhost -- vhost/common.sh@454 -- # /bin/kill 497934
00:17:28.264   18:38:14 sma.sma_vhost -- vhost/common.sh@455 -- # notice 'process 497934 killed'
00:17:28.264   18:38:14 sma.sma_vhost -- vhost/common.sh@94 -- # message INFO 'process 497934 killed'
00:17:28.264   18:38:14 sma.sma_vhost -- vhost/common.sh@60 -- # local verbose_out
00:17:28.264   18:38:14 sma.sma_vhost -- vhost/common.sh@61 -- # false
00:17:28.264   18:38:14 sma.sma_vhost -- vhost/common.sh@62 -- # verbose_out=
00:17:28.264   18:38:14 sma.sma_vhost -- vhost/common.sh@69 -- # local msg_type=INFO
00:17:28.264   18:38:14 sma.sma_vhost -- vhost/common.sh@70 -- # shift
00:17:28.264   18:38:14 sma.sma_vhost -- vhost/common.sh@71 -- # echo -e 'INFO: process 497934 killed'
00:17:28.264  INFO: process 497934 killed
00:17:28.264   18:38:14 sma.sma_vhost -- vhost/common.sh@456 -- # rm -rf /root/vhost_test/vms/0
00:17:28.264   18:38:14 sma.sma_vhost -- vhost/common.sh@481 -- # rm -rf /root/vhost_test/vms
00:17:28.264   18:38:14 sma.sma_vhost -- sma/vhost_blk.sh@347 -- # trap - SIGINT SIGTERM EXIT
00:17:28.264  
00:17:28.264  real	0m41.565s
00:17:28.264  user	0m42.087s
00:17:28.264  sys	0m2.340s
00:17:28.264   18:38:14 sma.sma_vhost -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:28.264   18:38:14 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:28.264  ************************************
00:17:28.264  END TEST sma_vhost
00:17:28.264  ************************************
00:17:28.264   18:38:14 sma -- sma/sma.sh@16 -- # run_test sma_crypto /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/crypto.sh
00:17:28.264   18:38:14 sma -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:28.264   18:38:14 sma -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:28.264   18:38:14 sma -- common/autotest_common.sh@10 -- # set +x
00:17:28.264  ************************************
00:17:28.264  START TEST sma_crypto
00:17:28.264  ************************************
00:17:28.264   18:38:14 sma.sma_crypto -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/crypto.sh
00:17:28.264  * Looking for test storage...
00:17:28.264  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:17:28.264    18:38:14 sma.sma_crypto -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:17:28.264     18:38:14 sma.sma_crypto -- common/autotest_common.sh@1693 -- # lcov --version
00:17:28.264     18:38:14 sma.sma_crypto -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:17:28.524    18:38:14 sma.sma_crypto -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:17:28.524    18:38:14 sma.sma_crypto -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:28.524    18:38:14 sma.sma_crypto -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:28.524    18:38:14 sma.sma_crypto -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:28.524    18:38:14 sma.sma_crypto -- scripts/common.sh@336 -- # IFS=.-:
00:17:28.524    18:38:14 sma.sma_crypto -- scripts/common.sh@336 -- # read -ra ver1
00:17:28.524    18:38:14 sma.sma_crypto -- scripts/common.sh@337 -- # IFS=.-:
00:17:28.524    18:38:14 sma.sma_crypto -- scripts/common.sh@337 -- # read -ra ver2
00:17:28.524    18:38:14 sma.sma_crypto -- scripts/common.sh@338 -- # local 'op=<'
00:17:28.524    18:38:14 sma.sma_crypto -- scripts/common.sh@340 -- # ver1_l=2
00:17:28.524    18:38:14 sma.sma_crypto -- scripts/common.sh@341 -- # ver2_l=1
00:17:28.524    18:38:14 sma.sma_crypto -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:28.524    18:38:14 sma.sma_crypto -- scripts/common.sh@344 -- # case "$op" in
00:17:28.524    18:38:14 sma.sma_crypto -- scripts/common.sh@345 -- # : 1
00:17:28.524    18:38:14 sma.sma_crypto -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:28.524    18:38:14 sma.sma_crypto -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:28.524     18:38:14 sma.sma_crypto -- scripts/common.sh@365 -- # decimal 1
00:17:28.524     18:38:14 sma.sma_crypto -- scripts/common.sh@353 -- # local d=1
00:17:28.524     18:38:14 sma.sma_crypto -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:28.524     18:38:14 sma.sma_crypto -- scripts/common.sh@355 -- # echo 1
00:17:28.524    18:38:14 sma.sma_crypto -- scripts/common.sh@365 -- # ver1[v]=1
00:17:28.524     18:38:14 sma.sma_crypto -- scripts/common.sh@366 -- # decimal 2
00:17:28.524     18:38:14 sma.sma_crypto -- scripts/common.sh@353 -- # local d=2
00:17:28.524     18:38:14 sma.sma_crypto -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:28.524     18:38:14 sma.sma_crypto -- scripts/common.sh@355 -- # echo 2
00:17:28.524    18:38:14 sma.sma_crypto -- scripts/common.sh@366 -- # ver2[v]=2
00:17:28.524    18:38:14 sma.sma_crypto -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:28.524    18:38:14 sma.sma_crypto -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:17:28.524    18:38:14 sma.sma_crypto -- scripts/common.sh@368 -- # return 0
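The `lt 1.15 2` trace above shows `scripts/common.sh` splitting each version string on `.-:`, comparing numeric components left to right, and treating missing components as zero (here `ver1[v]=1` vs `ver2[v]=2` decides the result). A stdlib sketch of that component-wise comparison:

```python
import re

def cmp_versions(v1, op, v2):
    """Compare dotted version strings component-wise, as the shell helper does."""
    def split(v):
        return [int(x) for x in re.split(r"[.:-]", v) if x.isdigit()]
    a, b = split(v1), split(v2)
    # Pad the shorter list with zeros so "1.15" vs "2" compares as [1, 15] vs [2, 0].
    width = max(len(a), len(b))
    a += [0] * (width - len(a))
    b += [0] * (width - len(b))
    return {"<": a < b, ">": a > b, "==": a == b}[op]

print(cmp_versions("1.15", "<", "2"))  # True: the installed lcov predates 2.x
```

List comparison in Python is lexicographic, which matches the shell loop's first-differing-component decision.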
00:17:28.524    18:38:14 sma.sma_crypto -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:28.524    18:38:14 sma.sma_crypto -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:17:28.524  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:28.524  		--rc genhtml_branch_coverage=1
00:17:28.524  		--rc genhtml_function_coverage=1
00:17:28.524  		--rc genhtml_legend=1
00:17:28.524  		--rc geninfo_all_blocks=1
00:17:28.524  		--rc geninfo_unexecuted_blocks=1
00:17:28.524  		
00:17:28.524  		'
00:17:28.524    18:38:14 sma.sma_crypto -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:17:28.524  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:28.524  		--rc genhtml_branch_coverage=1
00:17:28.524  		--rc genhtml_function_coverage=1
00:17:28.524  		--rc genhtml_legend=1
00:17:28.524  		--rc geninfo_all_blocks=1
00:17:28.524  		--rc geninfo_unexecuted_blocks=1
00:17:28.524  		
00:17:28.524  		'
00:17:28.524    18:38:14 sma.sma_crypto -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:17:28.524  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:28.524  		--rc genhtml_branch_coverage=1
00:17:28.524  		--rc genhtml_function_coverage=1
00:17:28.524  		--rc genhtml_legend=1
00:17:28.524  		--rc geninfo_all_blocks=1
00:17:28.524  		--rc geninfo_unexecuted_blocks=1
00:17:28.524  		
00:17:28.524  		'
00:17:28.524    18:38:14 sma.sma_crypto -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:17:28.524  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:28.524  		--rc genhtml_branch_coverage=1
00:17:28.524  		--rc genhtml_function_coverage=1
00:17:28.524  		--rc genhtml_legend=1
00:17:28.524  		--rc geninfo_all_blocks=1
00:17:28.524  		--rc geninfo_unexecuted_blocks=1
00:17:28.524  		
00:17:28.524  		'
00:17:28.524   18:38:14 sma.sma_crypto -- sma/crypto.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:17:28.524   18:38:14 sma.sma_crypto -- sma/crypto.sh@13 -- # rpc_py=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:17:28.524   18:38:14 sma.sma_crypto -- sma/crypto.sh@14 -- # localnqn=nqn.2016-06.io.spdk:cnode0
00:17:28.524   18:38:14 sma.sma_crypto -- sma/crypto.sh@15 -- # tgtnqn=nqn.2016-06.io.spdk:tgt0
00:17:28.524   18:38:14 sma.sma_crypto -- sma/crypto.sh@16 -- # key0=1234567890abcdef1234567890abcdef
00:17:28.524   18:38:14 sma.sma_crypto -- sma/crypto.sh@17 -- # key1=deadbeefcafebabefeedbeefbabecafe
00:17:28.524   18:38:14 sma.sma_crypto -- sma/crypto.sh@18 -- # tgtsock=/var/tmp/spdk.sock2
00:17:28.524   18:38:14 sma.sma_crypto -- sma/crypto.sh@19 -- # discovery_port=8009
00:17:28.524   18:38:14 sma.sma_crypto -- sma/crypto.sh@145 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:17:28.525   18:38:14 sma.sma_crypto -- sma/crypto.sh@148 -- # hostpid=505775
00:17:28.525   18:38:14 sma.sma_crypto -- sma/crypto.sh@147 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --wait-for-rpc
00:17:28.525   18:38:14 sma.sma_crypto -- sma/crypto.sh@150 -- # waitforlisten 505775
00:17:28.525   18:38:14 sma.sma_crypto -- common/autotest_common.sh@835 -- # '[' -z 505775 ']'
00:17:28.525   18:38:14 sma.sma_crypto -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:28.525   18:38:14 sma.sma_crypto -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:28.525   18:38:14 sma.sma_crypto -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:28.525  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:28.525   18:38:14 sma.sma_crypto -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:28.525   18:38:14 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
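`waitforlisten` above blocks until the freshly launched `spdk_tgt` (pid 505775) is ready on its RPC socket `/var/tmp/spdk.sock`, retrying up to `max_retries` times. A hedged stdlib sketch of that wait loop; the real helper likely issues an actual RPC to probe readiness, whereas this simplification only polls for the socket path to appear:

```python
import os
import time

def waitforlisten(sock_path, max_retries=100, delay=0.1):
    """Poll until the UNIX-domain socket path exists, echoing the helper's retry loop."""
    for _ in range(max_retries):
        if os.path.exists(sock_path):
            return True
        time.sleep(delay)
    return False
```

On success the shell helper proceeds to configure the target; on exhaustion of retries it aborts the test instead of returning.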
00:17:28.525  [2024-11-17 18:38:14.991833] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:17:28.525  [2024-11-17 18:38:14.991994] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid505775 ]
00:17:28.525  EAL: No free 2048 kB hugepages reported on node 1
00:17:28.525  [2024-11-17 18:38:15.098803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:28.784  [2024-11-17 18:38:15.139123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:29.352   18:38:15 sma.sma_crypto -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:29.352   18:38:15 sma.sma_crypto -- common/autotest_common.sh@868 -- # return 0
00:17:29.352   18:38:15 sma.sma_crypto -- sma/crypto.sh@153 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py dpdk_cryptodev_scan_accel_module
00:17:29.611   18:38:16 sma.sma_crypto -- sma/crypto.sh@154 -- # rpc_cmd dpdk_cryptodev_set_driver -d crypto_aesni_mb
00:17:29.611   18:38:16 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:29.611   18:38:16 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:29.611  [2024-11-17 18:38:16.058040] accel_dpdk_cryptodev.c: 224:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb
00:17:29.611   18:38:16 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:29.611   18:38:16 sma.sma_crypto -- sma/crypto.sh@155 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py accel_assign_opc -o encrypt -m dpdk_cryptodev
00:17:29.870  [2024-11-17 18:38:16.250537] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:17:29.870   18:38:16 sma.sma_crypto -- sma/crypto.sh@156 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py accel_assign_opc -o decrypt -m dpdk_cryptodev
00:17:30.129  [2024-11-17 18:38:16.471072] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:17:30.129   18:38:16 sma.sma_crypto -- sma/crypto.sh@157 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py framework_start_init
00:17:30.388  [2024-11-17 18:38:16.746411] accel_dpdk_cryptodev.c:1179:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 1
00:17:30.388   18:38:16 sma.sma_crypto -- sma/crypto.sh@159 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/spdk.sock2 -m 0x2
00:17:30.388   18:38:16 sma.sma_crypto -- sma/crypto.sh@160 -- # tgtpid=506190
00:17:30.388   18:38:16 sma.sma_crypto -- sma/crypto.sh@172 -- # smapid=506191
00:17:30.388   18:38:16 sma.sma_crypto -- sma/crypto.sh@175 -- # sma_waitforlisten
00:17:30.388   18:38:16 sma.sma_crypto -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:17:30.388    18:38:16 sma.sma_crypto -- sma/crypto.sh@162 -- # cat
00:17:30.388   18:38:16 sma.sma_crypto -- sma/crypto.sh@162 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:17:30.388   18:38:16 sma.sma_crypto -- sma/common.sh@8 -- # local sma_port=8080
00:17:30.388   18:38:16 sma.sma_crypto -- sma/common.sh@10 -- # (( i = 0 ))
00:17:30.388   18:38:16 sma.sma_crypto -- sma/common.sh@10 -- # (( i < 5 ))
00:17:30.388   18:38:16 sma.sma_crypto -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:17:30.647   18:38:16 sma.sma_crypto -- sma/common.sh@14 -- # sleep 1s
00:17:30.647  [2024-11-17 18:38:17.043716] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:17:30.647  [2024-11-17 18:38:17.043841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid506190 ]
00:17:30.647  EAL: No free 2048 kB hugepages reported on node 1
00:17:30.647  [2024-11-17 18:38:17.167021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:30.647  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:30.647  I0000 00:00:1731865097.172910  506191 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:30.647  [2024-11-17 18:38:17.186685] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:30.647  [2024-11-17 18:38:17.208776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:17:31.583   18:38:17 sma.sma_crypto -- sma/common.sh@10 -- # (( i++ ))
00:17:31.583   18:38:17 sma.sma_crypto -- sma/common.sh@10 -- # (( i < 5 ))
00:17:31.583   18:38:17 sma.sma_crypto -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:17:31.583   18:38:18 sma.sma_crypto -- sma/common.sh@12 -- # return 0
00:17:31.583    18:38:18 sma.sma_crypto -- sma/crypto.sh@178 -- # uuidgen
00:17:31.583   18:38:18 sma.sma_crypto -- sma/crypto.sh@178 -- # uuid=03ac9f90-b086-453e-b44a-49ba18c11aec
00:17:31.583   18:38:18 sma.sma_crypto -- sma/crypto.sh@179 -- # waitforlisten 506190 /var/tmp/spdk.sock2
00:17:31.583   18:38:18 sma.sma_crypto -- common/autotest_common.sh@835 -- # '[' -z 506190 ']'
00:17:31.583   18:38:18 sma.sma_crypto -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock2
00:17:31.583   18:38:18 sma.sma_crypto -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:31.583   18:38:18 sma.sma_crypto -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock2...'
00:17:31.583  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock2...
00:17:31.583   18:38:18 sma.sma_crypto -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:31.583   18:38:18 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:31.841   18:38:18 sma.sma_crypto -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:31.841   18:38:18 sma.sma_crypto -- common/autotest_common.sh@868 -- # return 0
00:17:31.841   18:38:18 sma.sma_crypto -- sma/crypto.sh@180 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock2
00:17:32.100  [2024-11-17 18:38:18.423341] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:32.100  [2024-11-17 18:38:18.439650] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 8009 ***
00:17:32.100  [2024-11-17 18:38:18.447555] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 ***
00:17:32.100  malloc0
00:17:32.100    18:38:18 sma.sma_crypto -- sma/crypto.sh@190 -- # create_device
00:17:32.100    18:38:18 sma.sma_crypto -- sma/crypto.sh@77 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:32.100    18:38:18 sma.sma_crypto -- sma/crypto.sh@190 -- # jq -r .handle
00:17:32.100  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:32.100  I0000 00:00:1731865098.663171  506436 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:32.100  I0000 00:00:1731865098.664730  506436 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:32.100  I0000 00:00:1731865098.666060  506437 subchannel.cc:806] subchannel 0x5623fac28280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5623fabaa880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5623fad69cf0, grpc.internal.client_channel_call_destination=0x7f52ae6eb390, grpc.internal.event_engine=0x5623faa39e40, grpc.internal.security_connector=0x5623fab90aa0, grpc.internal.subchannel_pool=0x5623fad9b4f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5623fad9e890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:18.665628538+01:00"}), backing off for 1000 ms
00:17:32.358  [2024-11-17 18:38:18.687270] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:17:32.358   18:38:18 sma.sma_crypto -- sma/crypto.sh@190 -- # device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:32.358   18:38:18 sma.sma_crypto -- sma/crypto.sh@193 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 03ac9f90-b086-453e-b44a-49ba18c11aec
00:17:32.358   18:38:18 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:32.358   18:38:18 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:17:32.358   18:38:18 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:32.358    18:38:18 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 03ac9f90-b086-453e-b44a-49ba18c11aec
00:17:32.358    18:38:18 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=03ac9f90-b086-453e-b44a-49ba18c11aec cipher= key= key2= config
00:17:32.358    18:38:18 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:17:32.358     18:38:18 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:17:32.358      18:38:18 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 03ac9f90-b086-453e-b44a-49ba18c11aec
00:17:32.358      18:38:18 sma.sma_crypto -- sma/common.sh@20 -- # python
00:17:32.358    18:38:18 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "A6yfkLCGRT60Skm6GMEa7A==",
00:17:32.358  "nvmf": {
00:17:32.358    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:17:32.358    "discovery": {
00:17:32.358      "discovery_endpoints": [
00:17:32.358        {
00:17:32.358          "trtype": "tcp",
00:17:32.358          "traddr": "127.0.0.1",
00:17:32.358          "trsvcid": "8009"
00:17:32.358        }
00:17:32.358      ]
00:17:32.358    }
00:17:32.358  }'
00:17:32.358    18:38:18 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:17:32.358    18:38:18 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:17:32.358    18:38:18 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n '' ]]
00:17:32.358    18:38:18 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:17:32.617  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:32.617  I0000 00:00:1731865099.006813  506458 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:32.617  I0000 00:00:1731865099.008552  506458 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:32.617  I0000 00:00:1731865099.009986  506667 subchannel.cc:806] subchannel 0x55f80698c280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55f80690e880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55f806acdcf0, grpc.internal.client_channel_call_destination=0x7f02ce86f390, grpc.internal.event_engine=0x55f8065887d0, grpc.internal.security_connector=0x55f806854eb0, grpc.internal.subchannel_pool=0x55f806aff4f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55f806b02890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:19.009550187+01:00"}), backing off for 1000 ms
00:17:33.991  {}
00:17:33.991    18:38:20 sma.sma_crypto -- sma/crypto.sh@195 -- # jq -r '.[0].namespaces[0].name'
00:17:33.991    18:38:20 sma.sma_crypto -- sma/crypto.sh@195 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:17:33.991    18:38:20 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:33.991    18:38:20 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:33.991    18:38:20 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:33.991   18:38:20 sma.sma_crypto -- sma/crypto.sh@195 -- # ns_bdev=7c8bdbf6-c3cb-477a-b148-e9e15e7710ae0n1
00:17:33.991    18:38:20 sma.sma_crypto -- sma/crypto.sh@196 -- # rpc_cmd bdev_get_bdevs -b 7c8bdbf6-c3cb-477a-b148-e9e15e7710ae0n1
00:17:33.991    18:38:20 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:33.991    18:38:20 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:33.991    18:38:20 sma.sma_crypto -- sma/crypto.sh@196 -- # jq -r '.[0].product_name'
00:17:33.991    18:38:20 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:33.991   18:38:20 sma.sma_crypto -- sma/crypto.sh@196 -- # [[ NVMe disk == \N\V\M\e\ \d\i\s\k ]]
00:17:33.991    18:38:20 sma.sma_crypto -- sma/crypto.sh@197 -- # rpc_cmd bdev_get_bdevs
00:17:33.991    18:38:20 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:33.991    18:38:20 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:33.991    18:38:20 sma.sma_crypto -- sma/crypto.sh@197 -- # jq -r '[.[] | select(.product_name == "crypto")] | length'
00:17:33.991    18:38:20 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:33.991   18:38:20 sma.sma_crypto -- sma/crypto.sh@197 -- # [[ 0 -eq 0 ]]
00:17:33.991    18:38:20 sma.sma_crypto -- sma/crypto.sh@198 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:17:33.991    18:38:20 sma.sma_crypto -- sma/crypto.sh@198 -- # jq -r '.[0].namespaces[0].uuid'
00:17:33.991    18:38:20 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:33.991    18:38:20 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:33.991    18:38:20 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:33.991   18:38:20 sma.sma_crypto -- sma/crypto.sh@198 -- # [[ 03ac9f90-b086-453e-b44a-49ba18c11aec == \0\3\a\c\9\f\9\0\-\b\0\8\6\-\4\5\3\e\-\b\4\4\a\-\4\9\b\a\1\8\c\1\1\a\e\c ]]
00:17:33.991    18:38:20 sma.sma_crypto -- sma/crypto.sh@199 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:17:33.991    18:38:20 sma.sma_crypto -- sma/crypto.sh@199 -- # jq -r '.[0].namespaces[0].nguid'
00:17:33.991    18:38:20 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:33.991    18:38:20 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:33.991    18:38:20 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:33.991    18:38:20 sma.sma_crypto -- sma/crypto.sh@199 -- # uuid2nguid 03ac9f90-b086-453e-b44a-49ba18c11aec
00:17:33.991    18:38:20 sma.sma_crypto -- sma/common.sh@40 -- # local uuid=03AC9F90-B086-453E-B44A-49BA18C11AEC
00:17:33.991    18:38:20 sma.sma_crypto -- sma/common.sh@41 -- # echo 03AC9F90B086453EB44A49BA18C11AEC
00:17:33.991   18:38:20 sma.sma_crypto -- sma/crypto.sh@199 -- # [[ 03AC9F90B086453EB44A49BA18C11AEC == \0\3\A\C\9\F\9\0\B\0\8\6\4\5\3\E\B\4\4\A\4\9\B\A\1\8\C\1\1\A\E\C ]]
00:17:33.992   18:38:20 sma.sma_crypto -- sma/crypto.sh@201 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 03ac9f90-b086-453e-b44a-49ba18c11aec
00:17:33.992   18:38:20 sma.sma_crypto -- sma/crypto.sh@120 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:33.992    18:38:20 sma.sma_crypto -- sma/crypto.sh@120 -- # uuid2base64 03ac9f90-b086-453e-b44a-49ba18c11aec
00:17:33.992    18:38:20 sma.sma_crypto -- sma/common.sh@20 -- # python
00:17:34.250  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:34.250  I0000 00:00:1731865100.620914  506909 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:34.250  I0000 00:00:1731865100.622640  506909 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:34.250  I0000 00:00:1731865100.624016  506914 subchannel.cc:806] subchannel 0x559086158280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5590860da880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x559086299cf0, grpc.internal.client_channel_call_destination=0x7fddd0fe9390, grpc.internal.event_engine=0x559085f69e40, grpc.internal.security_connector=0x5590860c0aa0, grpc.internal.subchannel_pool=0x5590862cb4f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5590862ce890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:20.623597259+01:00"}), backing off for 1000 ms
00:17:34.250  {}
00:17:34.250   18:38:20 sma.sma_crypto -- sma/crypto.sh@204 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 03ac9f90-b086-453e-b44a-49ba18c11aec AES_CBC 1234567890abcdef1234567890abcdef
00:17:34.250   18:38:20 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:34.250   18:38:20 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:17:34.250   18:38:20 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:34.250    18:38:20 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 03ac9f90-b086-453e-b44a-49ba18c11aec AES_CBC 1234567890abcdef1234567890abcdef
00:17:34.251    18:38:20 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=03ac9f90-b086-453e-b44a-49ba18c11aec cipher=AES_CBC key=1234567890abcdef1234567890abcdef key2= config
00:17:34.251    18:38:20 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:17:34.251     18:38:20 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:17:34.251      18:38:20 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 03ac9f90-b086-453e-b44a-49ba18c11aec
00:17:34.251      18:38:20 sma.sma_crypto -- sma/common.sh@20 -- # python
00:17:34.251    18:38:20 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "A6yfkLCGRT60Skm6GMEa7A==",
00:17:34.251  "nvmf": {
00:17:34.251    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:17:34.251    "discovery": {
00:17:34.251      "discovery_endpoints": [
00:17:34.251        {
00:17:34.251          "trtype": "tcp",
00:17:34.251          "traddr": "127.0.0.1",
00:17:34.251          "trsvcid": "8009"
00:17:34.251        }
00:17:34.251      ]
00:17:34.251    }
00:17:34.251  }'
00:17:34.251    18:38:20 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:17:34.251    18:38:20 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:17:34.251    18:38:20 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n AES_CBC ]]
00:17:34.251    18:38:20 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:17:34.251     18:38:20 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher AES_CBC
00:17:34.251     18:38:20 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:17:34.251     18:38:20 sma.sma_crypto -- sma/common.sh@28 -- # echo 0
00:17:34.251    18:38:20 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:17:34.251     18:38:20 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:17:34.251     18:38:20 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:17:34.251      18:38:20 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:17:34.251    18:38:20 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:17:34.251     18:38:20 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:17:34.251    18:38:20 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:17:34.251    "cipher": 0,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY="
00:17:34.251  }'
00:17:34.251    18:38:20 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:17:34.251    18:38:20 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:17:34.509  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:34.509  I0000 00:00:1731865101.032137  506938 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:34.509  I0000 00:00:1731865101.034118  506938 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:34.509  I0000 00:00:1731865101.035657  506957 subchannel.cc:806] subchannel 0x556db87b2280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x556db8734880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x556db88f3cf0, grpc.internal.client_channel_call_destination=0x7f545807e390, grpc.internal.event_engine=0x556db83ae7d0, grpc.internal.security_connector=0x556db8710a50, grpc.internal.subchannel_pool=0x556db89254f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x556db8928890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:21.035119959+01:00"}), backing off for 1000 ms
00:17:35.887  {}
00:17:35.887    18:38:22 sma.sma_crypto -- sma/crypto.sh@206 -- # rpc_cmd bdev_nvme_get_discovery_info
00:17:35.887    18:38:22 sma.sma_crypto -- sma/crypto.sh@206 -- # jq -r '. | length'
00:17:35.887    18:38:22 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:35.887    18:38:22 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:35.887    18:38:22 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:35.887   18:38:22 sma.sma_crypto -- sma/crypto.sh@206 -- # [[ 1 -eq 1 ]]
00:17:35.887    18:38:22 sma.sma_crypto -- sma/crypto.sh@207 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:17:35.887    18:38:22 sma.sma_crypto -- sma/crypto.sh@207 -- # jq -r '.[0].namespaces | length'
00:17:35.887    18:38:22 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:35.887    18:38:22 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:35.887    18:38:22 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:35.887   18:38:22 sma.sma_crypto -- sma/crypto.sh@207 -- # [[ 1 -eq 1 ]]
00:17:35.887   18:38:22 sma.sma_crypto -- sma/crypto.sh@209 -- # verify_crypto_volume nqn.2016-06.io.spdk:cnode0 03ac9f90-b086-453e-b44a-49ba18c11aec
00:17:35.887   18:38:22 sma.sma_crypto -- sma/crypto.sh@132 -- # local nqn=nqn.2016-06.io.spdk:cnode0 uuid=03ac9f90-b086-453e-b44a-49ba18c11aec ns ns_bdev
00:17:35.887    18:38:22 sma.sma_crypto -- sma/crypto.sh@134 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:17:35.887    18:38:22 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:35.887    18:38:22 sma.sma_crypto -- sma/crypto.sh@134 -- # jq -r '.[0].namespaces[0]'
00:17:35.887    18:38:22 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:35.887    18:38:22 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:35.887   18:38:22 sma.sma_crypto -- sma/crypto.sh@134 -- # ns='{
00:17:35.887    "nsid": 1,
00:17:35.887    "bdev_name": "4a708eb5-f8b8-40be-b6f3-3bbd55c4b2b0",
00:17:35.887    "name": "4a708eb5-f8b8-40be-b6f3-3bbd55c4b2b0",
00:17:35.887    "nguid": "03AC9F90B086453EB44A49BA18C11AEC",
00:17:35.887    "uuid": "03ac9f90-b086-453e-b44a-49ba18c11aec"
00:17:35.887  }'
00:17:35.887    18:38:22 sma.sma_crypto -- sma/crypto.sh@135 -- # jq -r .name
00:17:35.887   18:38:22 sma.sma_crypto -- sma/crypto.sh@135 -- # ns_bdev=4a708eb5-f8b8-40be-b6f3-3bbd55c4b2b0
00:17:35.887    18:38:22 sma.sma_crypto -- sma/crypto.sh@138 -- # rpc_cmd bdev_get_bdevs -b 4a708eb5-f8b8-40be-b6f3-3bbd55c4b2b0
00:17:35.887    18:38:22 sma.sma_crypto -- sma/crypto.sh@138 -- # jq -r '.[0].product_name'
00:17:35.887    18:38:22 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:35.887    18:38:22 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:35.887    18:38:22 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:35.887   18:38:22 sma.sma_crypto -- sma/crypto.sh@138 -- # [[ crypto == crypto ]]
00:17:35.887    18:38:22 sma.sma_crypto -- sma/crypto.sh@139 -- # rpc_cmd bdev_get_bdevs
00:17:35.887    18:38:22 sma.sma_crypto -- sma/crypto.sh@139 -- # jq -r '[.[] | select(.product_name == "crypto")] | length'
00:17:35.887    18:38:22 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:35.887    18:38:22 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:35.887    18:38:22 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:35.887   18:38:22 sma.sma_crypto -- sma/crypto.sh@139 -- # [[ 1 -eq 1 ]]
00:17:35.887    18:38:22 sma.sma_crypto -- sma/crypto.sh@141 -- # jq -r .uuid
00:17:36.146   18:38:22 sma.sma_crypto -- sma/crypto.sh@141 -- # [[ 03ac9f90-b086-453e-b44a-49ba18c11aec == \0\3\a\c\9\f\9\0\-\b\0\8\6\-\4\5\3\e\-\b\4\4\a\-\4\9\b\a\1\8\c\1\1\a\e\c ]]
00:17:36.146    18:38:22 sma.sma_crypto -- sma/crypto.sh@142 -- # jq -r .nguid
00:17:36.146    18:38:22 sma.sma_crypto -- sma/crypto.sh@142 -- # uuid2nguid 03ac9f90-b086-453e-b44a-49ba18c11aec
00:17:36.146    18:38:22 sma.sma_crypto -- sma/common.sh@40 -- # local uuid=03AC9F90-B086-453E-B44A-49BA18C11AEC
00:17:36.146    18:38:22 sma.sma_crypto -- sma/common.sh@41 -- # echo 03AC9F90B086453EB44A49BA18C11AEC
00:17:36.146   18:38:22 sma.sma_crypto -- sma/crypto.sh@142 -- # [[ 03AC9F90B086453EB44A49BA18C11AEC == \0\3\A\C\9\F\9\0\B\0\8\6\4\5\3\E\B\4\4\A\4\9\B\A\1\8\C\1\1\A\E\C ]]
00:17:36.146    18:38:22 sma.sma_crypto -- sma/crypto.sh@211 -- # jq -r '.[] | select(.product_name == "crypto")'
00:17:36.146    18:38:22 sma.sma_crypto -- sma/crypto.sh@211 -- # rpc_cmd bdev_get_bdevs
00:17:36.146    18:38:22 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.146    18:38:22 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:36.146    18:38:22 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.146   18:38:22 sma.sma_crypto -- sma/crypto.sh@211 -- # crypto_bdev='{
00:17:36.146    "name": "4a708eb5-f8b8-40be-b6f3-3bbd55c4b2b0",
00:17:36.146    "aliases": [
00:17:36.146      "35590aa2-cdbe-5972-a20a-4a97443fb718"
00:17:36.146    ],
00:17:36.146    "product_name": "crypto",
00:17:36.146    "block_size": 4096,
00:17:36.146    "num_blocks": 8192,
00:17:36.146    "uuid": "35590aa2-cdbe-5972-a20a-4a97443fb718",
00:17:36.146    "assigned_rate_limits": {
00:17:36.146      "rw_ios_per_sec": 0,
00:17:36.146      "rw_mbytes_per_sec": 0,
00:17:36.146      "r_mbytes_per_sec": 0,
00:17:36.146      "w_mbytes_per_sec": 0
00:17:36.146    },
00:17:36.146    "claimed": true,
00:17:36.146    "claim_type": "exclusive_write",
00:17:36.146    "zoned": false,
00:17:36.146    "supported_io_types": {
00:17:36.146      "read": true,
00:17:36.146      "write": true,
00:17:36.146      "unmap": true,
00:17:36.146      "flush": true,
00:17:36.146      "reset": true,
00:17:36.146      "nvme_admin": false,
00:17:36.146      "nvme_io": false,
00:17:36.146      "nvme_io_md": false,
00:17:36.146      "write_zeroes": true,
00:17:36.146      "zcopy": false,
00:17:36.146      "get_zone_info": false,
00:17:36.146      "zone_management": false,
00:17:36.146      "zone_append": false,
00:17:36.146      "compare": false,
00:17:36.146      "compare_and_write": false,
00:17:36.146      "abort": false,
00:17:36.146      "seek_hole": false,
00:17:36.146      "seek_data": false,
00:17:36.146      "copy": false,
00:17:36.146      "nvme_iov_md": false
00:17:36.146    },
00:17:36.146    "memory_domains": [
00:17:36.146      {
00:17:36.146        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:36.146        "dma_device_type": 2
00:17:36.146      }
00:17:36.146    ],
00:17:36.146    "driver_specific": {
00:17:36.146      "crypto": {
00:17:36.146        "base_bdev_name": "591c5a3b-f5f5-4a6a-a485-343a6e9b2b120n1",
00:17:36.146        "name": "4a708eb5-f8b8-40be-b6f3-3bbd55c4b2b0",
00:17:36.146        "key_name": "4a708eb5-f8b8-40be-b6f3-3bbd55c4b2b0_AES_CBC"
00:17:36.146      }
00:17:36.146    }
00:17:36.146  }'
00:17:36.146    18:38:22 sma.sma_crypto -- sma/crypto.sh@212 -- # jq -r .driver_specific.crypto.key_name
00:17:36.146   18:38:22 sma.sma_crypto -- sma/crypto.sh@212 -- # key_name=4a708eb5-f8b8-40be-b6f3-3bbd55c4b2b0_AES_CBC
00:17:36.146    18:38:22 sma.sma_crypto -- sma/crypto.sh@213 -- # rpc_cmd accel_crypto_keys_get -k 4a708eb5-f8b8-40be-b6f3-3bbd55c4b2b0_AES_CBC
00:17:36.146    18:38:22 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.146    18:38:22 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:36.146    18:38:22 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.146   18:38:22 sma.sma_crypto -- sma/crypto.sh@213 -- # key_obj='[
00:17:36.146  {
00:17:36.146  "name": "4a708eb5-f8b8-40be-b6f3-3bbd55c4b2b0_AES_CBC",
00:17:36.146  "cipher": "AES_CBC",
00:17:36.146  "key": "1234567890abcdef1234567890abcdef"
00:17:36.146  }
00:17:36.146  ]'
00:17:36.146    18:38:22 sma.sma_crypto -- sma/crypto.sh@214 -- # jq -r '.[0].key'
00:17:36.146   18:38:22 sma.sma_crypto -- sma/crypto.sh@214 -- # [[ 1234567890abcdef1234567890abcdef == \1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f\1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f ]]
00:17:36.146    18:38:22 sma.sma_crypto -- sma/crypto.sh@215 -- # jq -r '.[0].cipher'
00:17:36.146   18:38:22 sma.sma_crypto -- sma/crypto.sh@215 -- # [[ AES_CBC == \A\E\S\_\C\B\C ]]
00:17:36.146   18:38:22 sma.sma_crypto -- sma/crypto.sh@218 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 03ac9f90-b086-453e-b44a-49ba18c11aec AES_CBC 1234567890abcdef1234567890abcdef
00:17:36.146   18:38:22 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:36.146   18:38:22 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:17:36.146   18:38:22 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:36.146    18:38:22 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 03ac9f90-b086-453e-b44a-49ba18c11aec AES_CBC 1234567890abcdef1234567890abcdef
00:17:36.146    18:38:22 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=03ac9f90-b086-453e-b44a-49ba18c11aec cipher=AES_CBC key=1234567890abcdef1234567890abcdef key2= config
00:17:36.146    18:38:22 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:17:36.146     18:38:22 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:17:36.146      18:38:22 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 03ac9f90-b086-453e-b44a-49ba18c11aec
00:17:36.146      18:38:22 sma.sma_crypto -- sma/common.sh@20 -- # python
00:17:36.405    18:38:22 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "A6yfkLCGRT60Skm6GMEa7A==",
00:17:36.405  "nvmf": {
00:17:36.405    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:17:36.405    "discovery": {
00:17:36.405      "discovery_endpoints": [
00:17:36.405        {
00:17:36.405          "trtype": "tcp",
00:17:36.405          "traddr": "127.0.0.1",
00:17:36.405          "trsvcid": "8009"
00:17:36.405        }
00:17:36.405      ]
00:17:36.405    }
00:17:36.405  }'
00:17:36.405    18:38:22 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:17:36.405    18:38:22 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:17:36.405    18:38:22 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n AES_CBC ]]
00:17:36.405    18:38:22 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:17:36.405     18:38:22 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher AES_CBC
00:17:36.405     18:38:22 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:17:36.405     18:38:22 sma.sma_crypto -- sma/common.sh@28 -- # echo 0
00:17:36.405    18:38:22 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:17:36.405     18:38:22 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:17:36.405     18:38:22 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:17:36.405      18:38:22 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:17:36.405    18:38:22 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:17:36.405     18:38:22 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:17:36.405    18:38:22 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:17:36.405    "cipher": 0,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY="
00:17:36.405  }'
00:17:36.405    18:38:22 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:17:36.405    18:38:22 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:17:36.405  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:36.405  I0000 00:00:1731865102.953988  507411 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:36.405  I0000 00:00:1731865102.955584  507411 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:36.405  I0000 00:00:1731865102.957092  507430 subchannel.cc:806] subchannel 0x55cf90137280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55cf900b9880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55cf90278cf0, grpc.internal.client_channel_call_destination=0x7fadf25ac390, grpc.internal.event_engine=0x55cf8fd337d0, grpc.internal.security_connector=0x55cf90095a50, grpc.internal.subchannel_pool=0x55cf902aa4f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55cf902ad890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:22.95660347+01:00"}), backing off for 1000 ms
00:17:36.664  {}
00:17:36.664    18:38:23 sma.sma_crypto -- sma/crypto.sh@221 -- # rpc_cmd bdev_nvme_get_discovery_info
00:17:36.664    18:38:23 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.664    18:38:23 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:36.664    18:38:23 sma.sma_crypto -- sma/crypto.sh@221 -- # jq -r '. | length'
00:17:36.664    18:38:23 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.664   18:38:23 sma.sma_crypto -- sma/crypto.sh@221 -- # [[ 1 -eq 1 ]]
00:17:36.664    18:38:23 sma.sma_crypto -- sma/crypto.sh@222 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:17:36.664    18:38:23 sma.sma_crypto -- sma/crypto.sh@222 -- # jq -r '.[0].namespaces | length'
00:17:36.664    18:38:23 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.664    18:38:23 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:36.664    18:38:23 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.664   18:38:23 sma.sma_crypto -- sma/crypto.sh@222 -- # [[ 1 -eq 1 ]]
00:17:36.664   18:38:23 sma.sma_crypto -- sma/crypto.sh@223 -- # verify_crypto_volume nqn.2016-06.io.spdk:cnode0 03ac9f90-b086-453e-b44a-49ba18c11aec
00:17:36.664   18:38:23 sma.sma_crypto -- sma/crypto.sh@132 -- # local nqn=nqn.2016-06.io.spdk:cnode0 uuid=03ac9f90-b086-453e-b44a-49ba18c11aec ns ns_bdev
00:17:36.664    18:38:23 sma.sma_crypto -- sma/crypto.sh@134 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:17:36.664    18:38:23 sma.sma_crypto -- sma/crypto.sh@134 -- # jq -r '.[0].namespaces[0]'
00:17:36.664    18:38:23 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.664    18:38:23 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:36.664    18:38:23 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.664   18:38:23 sma.sma_crypto -- sma/crypto.sh@134 -- # ns='{
00:17:36.664    "nsid": 1,
00:17:36.664    "bdev_name": "4a708eb5-f8b8-40be-b6f3-3bbd55c4b2b0",
00:17:36.664    "name": "4a708eb5-f8b8-40be-b6f3-3bbd55c4b2b0",
00:17:36.664    "nguid": "03AC9F90B086453EB44A49BA18C11AEC",
00:17:36.664    "uuid": "03ac9f90-b086-453e-b44a-49ba18c11aec"
00:17:36.664  }'
00:17:36.664    18:38:23 sma.sma_crypto -- sma/crypto.sh@135 -- # jq -r .name
00:17:36.664   18:38:23 sma.sma_crypto -- sma/crypto.sh@135 -- # ns_bdev=4a708eb5-f8b8-40be-b6f3-3bbd55c4b2b0
00:17:36.664    18:38:23 sma.sma_crypto -- sma/crypto.sh@138 -- # rpc_cmd bdev_get_bdevs -b 4a708eb5-f8b8-40be-b6f3-3bbd55c4b2b0
00:17:36.664    18:38:23 sma.sma_crypto -- sma/crypto.sh@138 -- # jq -r '.[0].product_name'
00:17:36.664    18:38:23 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.665    18:38:23 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:36.665    18:38:23 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.665   18:38:23 sma.sma_crypto -- sma/crypto.sh@138 -- # [[ crypto == crypto ]]
00:17:36.665    18:38:23 sma.sma_crypto -- sma/crypto.sh@139 -- # rpc_cmd bdev_get_bdevs
00:17:36.665    18:38:23 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.665    18:38:23 sma.sma_crypto -- sma/crypto.sh@139 -- # jq -r '[.[] | select(.product_name == "crypto")] | length'
00:17:36.665    18:38:23 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:36.665    18:38:23 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.665   18:38:23 sma.sma_crypto -- sma/crypto.sh@139 -- # [[ 1 -eq 1 ]]
00:17:36.665    18:38:23 sma.sma_crypto -- sma/crypto.sh@141 -- # jq -r .uuid
00:17:36.924   18:38:23 sma.sma_crypto -- sma/crypto.sh@141 -- # [[ 03ac9f90-b086-453e-b44a-49ba18c11aec == \0\3\a\c\9\f\9\0\-\b\0\8\6\-\4\5\3\e\-\b\4\4\a\-\4\9\b\a\1\8\c\1\1\a\e\c ]]
00:17:36.924    18:38:23 sma.sma_crypto -- sma/crypto.sh@142 -- # jq -r .nguid
00:17:36.924    18:38:23 sma.sma_crypto -- sma/crypto.sh@142 -- # uuid2nguid 03ac9f90-b086-453e-b44a-49ba18c11aec
00:17:36.924    18:38:23 sma.sma_crypto -- sma/common.sh@40 -- # local uuid=03AC9F90-B086-453E-B44A-49BA18C11AEC
00:17:36.924    18:38:23 sma.sma_crypto -- sma/common.sh@41 -- # echo 03AC9F90B086453EB44A49BA18C11AEC
00:17:36.924   18:38:23 sma.sma_crypto -- sma/crypto.sh@142 -- # [[ 03AC9F90B086453EB44A49BA18C11AEC == \0\3\A\C\9\F\9\0\B\0\8\6\4\5\3\E\B\4\4\A\4\9\B\A\1\8\C\1\1\A\E\C ]]
00:17:36.924    18:38:23 sma.sma_crypto -- sma/crypto.sh@224 -- # rpc_cmd bdev_get_bdevs
00:17:36.924    18:38:23 sma.sma_crypto -- sma/crypto.sh@224 -- # jq -r '.[] | select(.product_name == "crypto")'
00:17:36.924    18:38:23 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.924    18:38:23 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:36.924    18:38:23 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.924   18:38:23 sma.sma_crypto -- sma/crypto.sh@224 -- # crypto_bdev2='{
00:17:36.924    "name": "4a708eb5-f8b8-40be-b6f3-3bbd55c4b2b0",
00:17:36.924    "aliases": [
00:17:36.924      "35590aa2-cdbe-5972-a20a-4a97443fb718"
00:17:36.924    ],
00:17:36.924    "product_name": "crypto",
00:17:36.924    "block_size": 4096,
00:17:36.924    "num_blocks": 8192,
00:17:36.924    "uuid": "35590aa2-cdbe-5972-a20a-4a97443fb718",
00:17:36.924    "assigned_rate_limits": {
00:17:36.924      "rw_ios_per_sec": 0,
00:17:36.924      "rw_mbytes_per_sec": 0,
00:17:36.924      "r_mbytes_per_sec": 0,
00:17:36.924      "w_mbytes_per_sec": 0
00:17:36.924    },
00:17:36.924    "claimed": true,
00:17:36.924    "claim_type": "exclusive_write",
00:17:36.924    "zoned": false,
00:17:36.924    "supported_io_types": {
00:17:36.924      "read": true,
00:17:36.924      "write": true,
00:17:36.924      "unmap": true,
00:17:36.924      "flush": true,
00:17:36.924      "reset": true,
00:17:36.924      "nvme_admin": false,
00:17:36.924      "nvme_io": false,
00:17:36.924      "nvme_io_md": false,
00:17:36.924      "write_zeroes": true,
00:17:36.924      "zcopy": false,
00:17:36.924      "get_zone_info": false,
00:17:36.924      "zone_management": false,
00:17:36.924      "zone_append": false,
00:17:36.924      "compare": false,
00:17:36.924      "compare_and_write": false,
00:17:36.924      "abort": false,
00:17:36.924      "seek_hole": false,
00:17:36.924      "seek_data": false,
00:17:36.924      "copy": false,
00:17:36.924      "nvme_iov_md": false
00:17:36.924    },
00:17:36.924    "memory_domains": [
00:17:36.924      {
00:17:36.924        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:36.924        "dma_device_type": 2
00:17:36.924      }
00:17:36.924    ],
00:17:36.924    "driver_specific": {
00:17:36.924      "crypto": {
00:17:36.924        "base_bdev_name": "591c5a3b-f5f5-4a6a-a485-343a6e9b2b120n1",
00:17:36.924        "name": "4a708eb5-f8b8-40be-b6f3-3bbd55c4b2b0",
00:17:36.924        "key_name": "4a708eb5-f8b8-40be-b6f3-3bbd55c4b2b0_AES_CBC"
00:17:36.924      }
00:17:36.924    }
00:17:36.924  }'
00:17:36.924    18:38:23 sma.sma_crypto -- sma/crypto.sh@225 -- # jq -r .name
00:17:36.924    18:38:23 sma.sma_crypto -- sma/crypto.sh@225 -- # jq -r .name
00:17:36.924   18:38:23 sma.sma_crypto -- sma/crypto.sh@225 -- # [[ 4a708eb5-f8b8-40be-b6f3-3bbd55c4b2b0 == 4a708eb5-f8b8-40be-b6f3-3bbd55c4b2b0 ]]
00:17:36.924    18:38:23 sma.sma_crypto -- sma/crypto.sh@226 -- # jq -r .driver_specific.crypto.key_name
00:17:36.924   18:38:23 sma.sma_crypto -- sma/crypto.sh@226 -- # key_name=4a708eb5-f8b8-40be-b6f3-3bbd55c4b2b0_AES_CBC
00:17:36.924    18:38:23 sma.sma_crypto -- sma/crypto.sh@227 -- # rpc_cmd accel_crypto_keys_get -k 4a708eb5-f8b8-40be-b6f3-3bbd55c4b2b0_AES_CBC
00:17:36.924    18:38:23 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.924    18:38:23 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:36.924    18:38:23 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.924   18:38:23 sma.sma_crypto -- sma/crypto.sh@227 -- # key_obj='[
00:17:36.924  {
00:17:36.924  "name": "4a708eb5-f8b8-40be-b6f3-3bbd55c4b2b0_AES_CBC",
00:17:36.924  "cipher": "AES_CBC",
00:17:36.924  "key": "1234567890abcdef1234567890abcdef"
00:17:36.924  }
00:17:36.924  ]'
00:17:36.924    18:38:23 sma.sma_crypto -- sma/crypto.sh@228 -- # jq -r '.[0].key'
00:17:36.924   18:38:23 sma.sma_crypto -- sma/crypto.sh@228 -- # [[ 1234567890abcdef1234567890abcdef == \1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f\1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f ]]
00:17:36.924    18:38:23 sma.sma_crypto -- sma/crypto.sh@229 -- # jq -r '.[0].cipher'
00:17:37.183   18:38:23 sma.sma_crypto -- sma/crypto.sh@229 -- # [[ AES_CBC == \A\E\S\_\C\B\C ]]
00:17:37.183   18:38:23 sma.sma_crypto -- sma/crypto.sh@232 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 03ac9f90-b086-453e-b44a-49ba18c11aec AES_XTS 1234567890abcdef1234567890abcdef
00:17:37.183   18:38:23 sma.sma_crypto -- common/autotest_common.sh@652 -- # local es=0
00:17:37.183   18:38:23 sma.sma_crypto -- common/autotest_common.sh@654 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 03ac9f90-b086-453e-b44a-49ba18c11aec AES_XTS 1234567890abcdef1234567890abcdef
00:17:37.183   18:38:23 sma.sma_crypto -- common/autotest_common.sh@640 -- # local arg=attach_volume
00:17:37.183   18:38:23 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:37.183    18:38:23 sma.sma_crypto -- common/autotest_common.sh@644 -- # type -t attach_volume
00:17:37.183   18:38:23 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:37.183   18:38:23 sma.sma_crypto -- common/autotest_common.sh@655 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 03ac9f90-b086-453e-b44a-49ba18c11aec AES_XTS 1234567890abcdef1234567890abcdef
00:17:37.183   18:38:23 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:37.183   18:38:23 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:17:37.183   18:38:23 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:37.183    18:38:23 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 03ac9f90-b086-453e-b44a-49ba18c11aec AES_XTS 1234567890abcdef1234567890abcdef
00:17:37.183    18:38:23 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=03ac9f90-b086-453e-b44a-49ba18c11aec cipher=AES_XTS key=1234567890abcdef1234567890abcdef key2= config
00:17:37.183    18:38:23 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:17:37.183     18:38:23 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:17:37.183      18:38:23 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 03ac9f90-b086-453e-b44a-49ba18c11aec
00:17:37.183      18:38:23 sma.sma_crypto -- sma/common.sh@20 -- # python
00:17:37.183    18:38:23 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "A6yfkLCGRT60Skm6GMEa7A==",
00:17:37.183  "nvmf": {
00:17:37.183    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:17:37.183    "discovery": {
00:17:37.183      "discovery_endpoints": [
00:17:37.183        {
00:17:37.183          "trtype": "tcp",
00:17:37.183          "traddr": "127.0.0.1",
00:17:37.183          "trsvcid": "8009"
00:17:37.183        }
00:17:37.184      ]
00:17:37.184    }
00:17:37.184  }'
00:17:37.184    18:38:23 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:17:37.184    18:38:23 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:17:37.184    18:38:23 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n AES_XTS ]]
00:17:37.184    18:38:23 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:17:37.184     18:38:23 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher AES_XTS
00:17:37.184     18:38:23 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:17:37.184     18:38:23 sma.sma_crypto -- sma/common.sh@29 -- # echo 1
00:17:37.184    18:38:23 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:17:37.184     18:38:23 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:17:37.184     18:38:23 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:17:37.184      18:38:23 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:17:37.184    18:38:23 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:17:37.184     18:38:23 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:17:37.184    18:38:23 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:17:37.184    "cipher": 1,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY="
00:17:37.184  }'
00:17:37.184    18:38:23 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:17:37.184    18:38:23 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:17:37.443  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:37.443  I0000 00:00:1731865103.887467  507493 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:37.443  I0000 00:00:1731865103.889324  507493 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:37.443  I0000 00:00:1731865103.891019  507704 subchannel.cc:806] subchannel 0x560ca0be4280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x560ca0b66880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x560ca0d25cf0, grpc.internal.client_channel_call_destination=0x7f3d85435390, grpc.internal.event_engine=0x560ca07e07d0, grpc.internal.security_connector=0x560ca0b42a50, grpc.internal.subchannel_pool=0x560ca0d574f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x560ca0d5a890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:23.890559677+01:00"}), backing off for 1000 ms
00:17:37.443  Traceback (most recent call last):
00:17:37.443    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:17:37.443      main(sys.argv[1:])
00:17:37.443    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:17:37.443      result = client.call(request['method'], request.get('params', {}))
00:17:37.443               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:37.443    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:17:37.443      response = func(request=json_format.ParseDict(params, input()))
00:17:37.443                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:37.443    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:17:37.443      return _end_unary_response_blocking(state, call, False, None)
00:17:37.443             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:37.443    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:17:37.443      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:17:37.443      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:37.443  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:17:37.443  	status = StatusCode.INVALID_ARGUMENT
00:17:37.443  	details = "Invalid volume crypto configuration: bad cipher"
00:17:37.443  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Invalid volume crypto configuration: bad cipher", grpc_status:3, created_time:"2024-11-17T18:38:23.908853908+01:00"}"
00:17:37.443  >
00:17:37.443   18:38:23 sma.sma_crypto -- common/autotest_common.sh@655 -- # es=1
00:17:37.443   18:38:23 sma.sma_crypto -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:37.443   18:38:23 sma.sma_crypto -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:37.443   18:38:23 sma.sma_crypto -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:37.443   18:38:23 sma.sma_crypto -- sma/crypto.sh@234 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 03ac9f90-b086-453e-b44a-49ba18c11aec AES_CBC deadbeefcafebabefeedbeefbabecafe
00:17:37.443   18:38:23 sma.sma_crypto -- common/autotest_common.sh@652 -- # local es=0
00:17:37.443   18:38:23 sma.sma_crypto -- common/autotest_common.sh@654 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 03ac9f90-b086-453e-b44a-49ba18c11aec AES_CBC deadbeefcafebabefeedbeefbabecafe
00:17:37.443   18:38:23 sma.sma_crypto -- common/autotest_common.sh@640 -- # local arg=attach_volume
00:17:37.443   18:38:23 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:37.443    18:38:23 sma.sma_crypto -- common/autotest_common.sh@644 -- # type -t attach_volume
00:17:37.443   18:38:23 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:37.443   18:38:23 sma.sma_crypto -- common/autotest_common.sh@655 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 03ac9f90-b086-453e-b44a-49ba18c11aec AES_CBC deadbeefcafebabefeedbeefbabecafe
00:17:37.443   18:38:23 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:37.443   18:38:23 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:17:37.443   18:38:23 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:37.443    18:38:23 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 03ac9f90-b086-453e-b44a-49ba18c11aec AES_CBC deadbeefcafebabefeedbeefbabecafe
00:17:37.443    18:38:23 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=03ac9f90-b086-453e-b44a-49ba18c11aec cipher=AES_CBC key=deadbeefcafebabefeedbeefbabecafe key2= config
00:17:37.443    18:38:23 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:17:37.443     18:38:23 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:17:37.443      18:38:23 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 03ac9f90-b086-453e-b44a-49ba18c11aec
00:17:37.443      18:38:23 sma.sma_crypto -- sma/common.sh@20 -- # python
00:17:37.443    18:38:23 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "A6yfkLCGRT60Skm6GMEa7A==",
00:17:37.443  "nvmf": {
00:17:37.443    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:17:37.443    "discovery": {
00:17:37.443      "discovery_endpoints": [
00:17:37.443        {
00:17:37.443          "trtype": "tcp",
00:17:37.443          "traddr": "127.0.0.1",
00:17:37.443          "trsvcid": "8009"
00:17:37.443        }
00:17:37.443      ]
00:17:37.443    }
00:17:37.443  }'
00:17:37.443    18:38:23 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:17:37.443    18:38:23 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:17:37.443    18:38:23 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n AES_CBC ]]
00:17:37.443    18:38:23 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:17:37.443     18:38:23 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher AES_CBC
00:17:37.443     18:38:23 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:17:37.443     18:38:23 sma.sma_crypto -- sma/common.sh@28 -- # echo 0
00:17:37.443    18:38:23 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:17:37.443     18:38:23 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key deadbeefcafebabefeedbeefbabecafe
00:17:37.443     18:38:23 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:17:37.443      18:38:23 sma.sma_crypto -- sma/common.sh@35 -- # echo -n deadbeefcafebabefeedbeefbabecafe
00:17:37.443    18:38:23 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:17:37.443     18:38:23 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:17:37.443    18:38:23 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:17:37.443    "cipher": 0,"key": "ZGVhZGJlZWZjYWZlYmFiZWZlZWRiZWVmYmFiZWNhZmU="
00:17:37.443  }'
00:17:37.443    18:38:23 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:17:37.443    18:38:23 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:17:37.703  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:37.703  I0000 00:00:1731865104.193195  507725 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:37.703  I0000 00:00:1731865104.194760  507725 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:37.703  I0000 00:00:1731865104.196259  507742 subchannel.cc:806] subchannel 0x560a7cb57280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x560a7cad9880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x560a7cc98cf0, grpc.internal.client_channel_call_destination=0x7f00b327a390, grpc.internal.event_engine=0x560a7c7537d0, grpc.internal.security_connector=0x560a7cab5a50, grpc.internal.subchannel_pool=0x560a7ccca4f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x560a7cccd890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:24.195751226+01:00"}), backing off for 1000 ms
00:17:37.703  Traceback (most recent call last):
00:17:37.703    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:17:37.703      main(sys.argv[1:])
00:17:37.703    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:17:37.703      result = client.call(request['method'], request.get('params', {}))
00:17:37.703               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:37.703    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:17:37.703      response = func(request=json_format.ParseDict(params, input()))
00:17:37.703                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:37.703    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:17:37.703      return _end_unary_response_blocking(state, call, False, None)
00:17:37.703             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:37.703    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:17:37.703      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:17:37.703      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:37.703  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:17:37.703  	status = StatusCode.INVALID_ARGUMENT
00:17:37.703  	details = "Invalid volume crypto configuration: bad key"
00:17:37.703  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-11-17T18:38:24.214041258+01:00", grpc_status:3, grpc_message:"Invalid volume crypto configuration: bad key"}"
00:17:37.703  >
00:17:37.703   18:38:24 sma.sma_crypto -- common/autotest_common.sh@655 -- # es=1
00:17:37.703   18:38:24 sma.sma_crypto -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:37.703   18:38:24 sma.sma_crypto -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:37.703   18:38:24 sma.sma_crypto -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:37.703   18:38:24 sma.sma_crypto -- sma/crypto.sh@236 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 03ac9f90-b086-453e-b44a-49ba18c11aec AES_CBC 1234567890abcdef1234567890abcdef deadbeefcafebabefeedbeefbabecafe
00:17:37.703   18:38:24 sma.sma_crypto -- common/autotest_common.sh@652 -- # local es=0
00:17:37.703   18:38:24 sma.sma_crypto -- common/autotest_common.sh@654 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 03ac9f90-b086-453e-b44a-49ba18c11aec AES_CBC 1234567890abcdef1234567890abcdef deadbeefcafebabefeedbeefbabecafe
00:17:37.703   18:38:24 sma.sma_crypto -- common/autotest_common.sh@640 -- # local arg=attach_volume
00:17:37.703   18:38:24 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:37.703    18:38:24 sma.sma_crypto -- common/autotest_common.sh@644 -- # type -t attach_volume
00:17:37.703   18:38:24 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:37.703   18:38:24 sma.sma_crypto -- common/autotest_common.sh@655 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 03ac9f90-b086-453e-b44a-49ba18c11aec AES_CBC 1234567890abcdef1234567890abcdef deadbeefcafebabefeedbeefbabecafe
00:17:37.703   18:38:24 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:37.703   18:38:24 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:17:37.703   18:38:24 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:37.703    18:38:24 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 03ac9f90-b086-453e-b44a-49ba18c11aec AES_CBC 1234567890abcdef1234567890abcdef deadbeefcafebabefeedbeefbabecafe
00:17:37.703    18:38:24 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=03ac9f90-b086-453e-b44a-49ba18c11aec cipher=AES_CBC key=1234567890abcdef1234567890abcdef key2=deadbeefcafebabefeedbeefbabecafe config
00:17:37.703    18:38:24 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:17:37.703     18:38:24 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:17:37.703      18:38:24 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 03ac9f90-b086-453e-b44a-49ba18c11aec
00:17:37.703      18:38:24 sma.sma_crypto -- sma/common.sh@20 -- # python
00:17:37.962    18:38:24 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "A6yfkLCGRT60Skm6GMEa7A==",
00:17:37.962  "nvmf": {
00:17:37.962    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:17:37.962    "discovery": {
00:17:37.962      "discovery_endpoints": [
00:17:37.962        {
00:17:37.962          "trtype": "tcp",
00:17:37.962          "traddr": "127.0.0.1",
00:17:37.962          "trsvcid": "8009"
00:17:37.962        }
00:17:37.962      ]
00:17:37.962    }
00:17:37.962  }'
00:17:37.962    18:38:24 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:17:37.962    18:38:24 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:17:37.962    18:38:24 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n AES_CBC ]]
00:17:37.962    18:38:24 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:17:37.962     18:38:24 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher AES_CBC
00:17:37.962     18:38:24 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:17:37.962     18:38:24 sma.sma_crypto -- sma/common.sh@28 -- # echo 0
00:17:37.962    18:38:24 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:17:37.962     18:38:24 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:17:37.962     18:38:24 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:17:37.962      18:38:24 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:17:37.962    18:38:24 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n deadbeefcafebabefeedbeefbabecafe ]]
00:17:37.962    18:38:24 sma.sma_crypto -- sma/crypto.sh@55 -- # crypto+=("\"key2\": \"$(format_key $key2)\"")
00:17:37.962     18:38:24 sma.sma_crypto -- sma/crypto.sh@55 -- # format_key deadbeefcafebabefeedbeefbabecafe
00:17:37.962     18:38:24 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:17:37.962      18:38:24 sma.sma_crypto -- sma/common.sh@35 -- # echo -n deadbeefcafebabefeedbeefbabecafe
00:17:37.962     18:38:24 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:17:37.963    18:38:24 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:17:37.963    "cipher": 0,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY=","key2": "ZGVhZGJlZWZjYWZlYmFiZWZlZWRiZWVmYmFiZWNhZmU="
00:17:37.963  }'
00:17:37.963    18:38:24 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:17:37.963    18:38:24 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:17:37.963  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:37.963  I0000 00:00:1731865104.504524  507763 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:37.963  I0000 00:00:1731865104.506115  507763 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:37.963  I0000 00:00:1731865104.507578  507782 subchannel.cc:806] subchannel 0x5629ddb05280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5629ddc46cf0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5629ddba2da0, grpc.internal.client_channel_call_destination=0x7f60652d4390, grpc.internal.event_engine=0x5629ddbfe980, grpc.internal.security_connector=0x5629dda63a50, grpc.internal.subchannel_pool=0x5629ddc7c560, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5629ddc87d60, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:24.507135648+01:00"}), backing off for 1000 ms
00:17:37.963  Traceback (most recent call last):
00:17:37.963    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:17:37.963      main(sys.argv[1:])
00:17:37.963    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:17:37.963      result = client.call(request['method'], request.get('params', {}))
00:17:37.963               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:37.963    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:17:37.963      response = func(request=json_format.ParseDict(params, input()))
00:17:37.963                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:37.963    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:17:37.963      return _end_unary_response_blocking(state, call, False, None)
00:17:37.963             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:37.963    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:17:37.963      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:17:37.963      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:37.963  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:17:37.963  	status = StatusCode.INVALID_ARGUMENT
00:17:37.963  	details = "Invalid volume crypto configuration: bad key2"
00:17:37.963  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-11-17T18:38:24.522606633+01:00", grpc_status:3, grpc_message:"Invalid volume crypto configuration: bad key2"}"
00:17:37.963  >
00:17:38.221   18:38:24 sma.sma_crypto -- common/autotest_common.sh@655 -- # es=1
00:17:38.221   18:38:24 sma.sma_crypto -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:38.222   18:38:24 sma.sma_crypto -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:38.222   18:38:24 sma.sma_crypto -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:38.222   18:38:24 sma.sma_crypto -- sma/crypto.sh@238 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 03ac9f90-b086-453e-b44a-49ba18c11aec 8 1234567890abcdef1234567890abcdef
00:17:38.222   18:38:24 sma.sma_crypto -- common/autotest_common.sh@652 -- # local es=0
00:17:38.222   18:38:24 sma.sma_crypto -- common/autotest_common.sh@654 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 03ac9f90-b086-453e-b44a-49ba18c11aec 8 1234567890abcdef1234567890abcdef
00:17:38.222   18:38:24 sma.sma_crypto -- common/autotest_common.sh@640 -- # local arg=attach_volume
00:17:38.222   18:38:24 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:38.222    18:38:24 sma.sma_crypto -- common/autotest_common.sh@644 -- # type -t attach_volume
00:17:38.222   18:38:24 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:38.222   18:38:24 sma.sma_crypto -- common/autotest_common.sh@655 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 03ac9f90-b086-453e-b44a-49ba18c11aec 8 1234567890abcdef1234567890abcdef
00:17:38.222   18:38:24 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:38.222   18:38:24 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:17:38.222   18:38:24 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:38.222    18:38:24 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 03ac9f90-b086-453e-b44a-49ba18c11aec 8 1234567890abcdef1234567890abcdef
00:17:38.222    18:38:24 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=03ac9f90-b086-453e-b44a-49ba18c11aec cipher=8 key=1234567890abcdef1234567890abcdef key2= config
00:17:38.222    18:38:24 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:17:38.222     18:38:24 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:17:38.222      18:38:24 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 03ac9f90-b086-453e-b44a-49ba18c11aec
00:17:38.222      18:38:24 sma.sma_crypto -- sma/common.sh@20 -- # python
00:17:38.222    18:38:24 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "A6yfkLCGRT60Skm6GMEa7A==",
00:17:38.222  "nvmf": {
00:17:38.222    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:17:38.222    "discovery": {
00:17:38.222      "discovery_endpoints": [
00:17:38.222        {
00:17:38.222          "trtype": "tcp",
00:17:38.222          "traddr": "127.0.0.1",
00:17:38.222          "trsvcid": "8009"
00:17:38.222        }
00:17:38.222      ]
00:17:38.222    }
00:17:38.222  }'
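The `volume_id` field in the config above is produced by the `uuid2base64` helper, which base64-encodes the 16 raw bytes of the volume UUID. A sketch of the equivalent Python (the helper itself is a small inline Python snippet in `sma/common.sh`; this reimplements the same conversion):

```python
import base64
import uuid

def uuid2base64(volume_id: str) -> str:
    # Encode the 16 raw UUID bytes, not the dashed text form.
    return base64.b64encode(uuid.UUID(volume_id).bytes).decode()

print(uuid2base64("03ac9f90-b086-453e-b44a-49ba18c11aec"))
# A6yfkLCGRT60Skm6GMEa7A==
```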
00:17:38.222    18:38:24 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:17:38.222    18:38:24 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:17:38.222    18:38:24 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n 8 ]]
00:17:38.222    18:38:24 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:17:38.222     18:38:24 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher 8
00:17:38.222     18:38:24 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:17:38.222     18:38:24 sma.sma_crypto -- sma/common.sh@30 -- # echo 8
00:17:38.222    18:38:24 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:17:38.222     18:38:24 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:17:38.222     18:38:24 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:17:38.222      18:38:24 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:17:38.222    18:38:24 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:17:38.222     18:38:24 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:17:38.222    18:38:24 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:17:38.222    "cipher": 8,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY="
00:17:38.222  }'
00:17:38.222    18:38:24 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:17:38.222    18:38:24 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:17:38.222  I0000 00:00:1731865104.794936  507803 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:38.222  I0000 00:00:1731865104.796705  507803 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:38.481  I0000 00:00:1731865104.798347  507846 subchannel.cc:806] subchannel 0x5565343b5280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x556534337880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5565344f6cf0, grpc.internal.client_channel_call_destination=0x7fc8275e6390, grpc.internal.event_engine=0x556533fb17d0, grpc.internal.security_connector=0x556534313a50, grpc.internal.subchannel_pool=0x5565345284f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55653452b890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:24.797780076+01:00"}), backing off for 1000 ms
00:17:38.481  Traceback (most recent call last):
00:17:38.481    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:17:38.481      main(sys.argv[1:])
00:17:38.481    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:17:38.481      result = client.call(request['method'], request.get('params', {}))
00:17:38.481               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:38.481    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:17:38.481      response = func(request=json_format.ParseDict(params, input()))
00:17:38.481                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:38.481    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:17:38.481      return _end_unary_response_blocking(state, call, False, None)
00:17:38.481             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:38.481    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:17:38.481      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:17:38.481      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:38.481  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:17:38.481  	status = StatusCode.INVALID_ARGUMENT
00:17:38.481  	details = "Invalid volume crypto configuration: bad cipher"
00:17:38.481  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-11-17T18:38:24.815519456+01:00", grpc_status:3, grpc_message:"Invalid volume crypto configuration: bad cipher"}"
00:17:38.481  >
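The failure above is the negative-path check: `get_cipher` maps known cipher names to their numeric enum values but passes unrecognised input (here the raw string `8`) through unchanged, so the server rejects it with `INVALID_ARGUMENT: bad cipher`. A sketch of that mapping (only `AES_CBC -> 0` and the passthrough are confirmed by this trace; the `AES_XTS -> 1` entry is an assumption):

```python
def get_cipher(cipher: str) -> str:
    # Sketch of sma/common.sh get_cipher as exercised in this log:
    # AES_CBC maps to enum value 0 (common.sh@28); anything unrecognised
    # falls through and is echoed back unchanged (common.sh@30), leaving
    # the SMA server to reject it. AES_XTS -> 1 is an assumption.
    mapping = {"AES_CBC": "0", "AES_XTS": "1"}
    return mapping.get(cipher, cipher)

print(get_cipher("AES_CBC"))  # 0
print(get_cipher("8"))        # 8 -> rejected server-side with "bad cipher"
```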
00:17:38.481   18:38:24 sma.sma_crypto -- common/autotest_common.sh@655 -- # es=1
00:17:38.481   18:38:24 sma.sma_crypto -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:38.481   18:38:24 sma.sma_crypto -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:38.481   18:38:24 sma.sma_crypto -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:38.481   18:38:24 sma.sma_crypto -- sma/crypto.sh@241 -- # verify_crypto_volume nqn.2016-06.io.spdk:cnode0 03ac9f90-b086-453e-b44a-49ba18c11aec
00:17:38.481   18:38:24 sma.sma_crypto -- sma/crypto.sh@132 -- # local nqn=nqn.2016-06.io.spdk:cnode0 uuid=03ac9f90-b086-453e-b44a-49ba18c11aec ns ns_bdev
00:17:38.481    18:38:24 sma.sma_crypto -- sma/crypto.sh@134 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:17:38.481    18:38:24 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:38.481    18:38:24 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:38.481    18:38:24 sma.sma_crypto -- sma/crypto.sh@134 -- # jq -r '.[0].namespaces[0]'
00:17:38.481    18:38:24 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:38.481   18:38:24 sma.sma_crypto -- sma/crypto.sh@134 -- # ns='{
00:17:38.481    "nsid": 1,
00:17:38.481    "bdev_name": "4a708eb5-f8b8-40be-b6f3-3bbd55c4b2b0",
00:17:38.481    "name": "4a708eb5-f8b8-40be-b6f3-3bbd55c4b2b0",
00:17:38.481    "nguid": "03AC9F90B086453EB44A49BA18C11AEC",
00:17:38.481    "uuid": "03ac9f90-b086-453e-b44a-49ba18c11aec"
00:17:38.481  }'
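The `verify_crypto_volume` steps that follow just pick fields out of this namespace object with `jq`. The same extraction in Python, using the exact JSON printed above:

```python
import json

# Namespace object returned by `nvmf_get_subsystems | jq -r '.[0].namespaces[0]'`;
# the subsequent `jq -r .name` step extracts the backing bdev name.
ns = json.loads("""{
  "nsid": 1,
  "bdev_name": "4a708eb5-f8b8-40be-b6f3-3bbd55c4b2b0",
  "name": "4a708eb5-f8b8-40be-b6f3-3bbd55c4b2b0",
  "nguid": "03AC9F90B086453EB44A49BA18C11AEC",
  "uuid": "03ac9f90-b086-453e-b44a-49ba18c11aec"
}""")
print(ns["name"])  # 4a708eb5-f8b8-40be-b6f3-3bbd55c4b2b0
```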
00:17:38.481    18:38:24 sma.sma_crypto -- sma/crypto.sh@135 -- # jq -r .name
00:17:38.481   18:38:24 sma.sma_crypto -- sma/crypto.sh@135 -- # ns_bdev=4a708eb5-f8b8-40be-b6f3-3bbd55c4b2b0
00:17:38.481    18:38:24 sma.sma_crypto -- sma/crypto.sh@138 -- # rpc_cmd bdev_get_bdevs -b 4a708eb5-f8b8-40be-b6f3-3bbd55c4b2b0
00:17:38.481    18:38:24 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:38.481    18:38:24 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:38.481    18:38:24 sma.sma_crypto -- sma/crypto.sh@138 -- # jq -r '.[0].product_name'
00:17:38.481    18:38:24 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:38.481   18:38:24 sma.sma_crypto -- sma/crypto.sh@138 -- # [[ crypto == crypto ]]
00:17:38.481    18:38:24 sma.sma_crypto -- sma/crypto.sh@139 -- # rpc_cmd bdev_get_bdevs
00:17:38.481    18:38:24 sma.sma_crypto -- sma/crypto.sh@139 -- # jq -r '[.[] | select(.product_name == "crypto")] | length'
00:17:38.481    18:38:24 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:38.481    18:38:24 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:38.481    18:38:24 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:38.481   18:38:24 sma.sma_crypto -- sma/crypto.sh@139 -- # [[ 1 -eq 1 ]]
00:17:38.481    18:38:24 sma.sma_crypto -- sma/crypto.sh@141 -- # jq -r .uuid
00:17:38.481   18:38:25 sma.sma_crypto -- sma/crypto.sh@141 -- # [[ 03ac9f90-b086-453e-b44a-49ba18c11aec == \0\3\a\c\9\f\9\0\-\b\0\8\6\-\4\5\3\e\-\b\4\4\a\-\4\9\b\a\1\8\c\1\1\a\e\c ]]
00:17:38.481    18:38:25 sma.sma_crypto -- sma/crypto.sh@142 -- # jq -r .nguid
00:17:38.740    18:38:25 sma.sma_crypto -- sma/crypto.sh@142 -- # uuid2nguid 03ac9f90-b086-453e-b44a-49ba18c11aec
00:17:38.740    18:38:25 sma.sma_crypto -- sma/common.sh@40 -- # local uuid=03AC9F90-B086-453E-B44A-49BA18C11AEC
00:17:38.740    18:38:25 sma.sma_crypto -- sma/common.sh@41 -- # echo 03AC9F90B086453EB44A49BA18C11AEC
00:17:38.740   18:38:25 sma.sma_crypto -- sma/crypto.sh@142 -- # [[ 03AC9F90B086453EB44A49BA18C11AEC == \0\3\A\C\9\F\9\0\B\0\8\6\4\5\3\E\B\4\4\A\4\9\B\A\1\8\C\1\1\A\E\C ]]
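The NGUID comparison above relies on `uuid2nguid`, which simply upper-cases the UUID and strips the dashes to produce the 32-hex-digit NVMe namespace globally unique identifier. A one-line Python equivalent:

```python
def uuid2nguid(volume_uuid: str) -> str:
    # Mirrors sma/common.sh uuid2nguid: upper-case and drop dashes.
    return volume_uuid.upper().replace("-", "")

print(uuid2nguid("03ac9f90-b086-453e-b44a-49ba18c11aec"))
# 03AC9F90B086453EB44A49BA18C11AEC
```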
00:17:38.740   18:38:25 sma.sma_crypto -- sma/crypto.sh@243 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 03ac9f90-b086-453e-b44a-49ba18c11aec
00:17:38.740   18:38:25 sma.sma_crypto -- sma/crypto.sh@120 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:38.740    18:38:25 sma.sma_crypto -- sma/crypto.sh@120 -- # uuid2base64 03ac9f90-b086-453e-b44a-49ba18c11aec
00:17:38.740    18:38:25 sma.sma_crypto -- sma/common.sh@20 -- # python
00:17:39.000  I0000 00:00:1731865105.351130  508051 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:39.000  I0000 00:00:1731865105.352989  508051 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:39.000  I0000 00:00:1731865105.354325  508054 subchannel.cc:806] subchannel 0x560416b4d280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x560416acf880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x560416c8ecf0, grpc.internal.client_channel_call_destination=0x7f799fb64390, grpc.internal.event_engine=0x56041695ee40, grpc.internal.security_connector=0x560416ab5aa0, grpc.internal.subchannel_pool=0x560416cc04f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x560416cc3890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:25.353840832+01:00"}), backing off for 999 ms
00:17:39.000  {}
00:17:39.000   18:38:25 sma.sma_crypto -- sma/crypto.sh@247 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 03ac9f90-b086-453e-b44a-49ba18c11aec 8 1234567890abcdef1234567890abcdef
00:17:39.000   18:38:25 sma.sma_crypto -- common/autotest_common.sh@652 -- # local es=0
00:17:39.000   18:38:25 sma.sma_crypto -- common/autotest_common.sh@654 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 03ac9f90-b086-453e-b44a-49ba18c11aec 8 1234567890abcdef1234567890abcdef
00:17:39.000   18:38:25 sma.sma_crypto -- common/autotest_common.sh@640 -- # local arg=attach_volume
00:17:39.000   18:38:25 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:39.000    18:38:25 sma.sma_crypto -- common/autotest_common.sh@644 -- # type -t attach_volume
00:17:39.000   18:38:25 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:39.000   18:38:25 sma.sma_crypto -- common/autotest_common.sh@655 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 03ac9f90-b086-453e-b44a-49ba18c11aec 8 1234567890abcdef1234567890abcdef
00:17:39.000   18:38:25 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:39.000   18:38:25 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:17:39.000   18:38:25 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:39.000    18:38:25 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 03ac9f90-b086-453e-b44a-49ba18c11aec 8 1234567890abcdef1234567890abcdef
00:17:39.000    18:38:25 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=03ac9f90-b086-453e-b44a-49ba18c11aec cipher=8 key=1234567890abcdef1234567890abcdef key2= config
00:17:39.000    18:38:25 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:17:39.000     18:38:25 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:17:39.000      18:38:25 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 03ac9f90-b086-453e-b44a-49ba18c11aec
00:17:39.000      18:38:25 sma.sma_crypto -- sma/common.sh@20 -- # python
00:17:39.000    18:38:25 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "A6yfkLCGRT60Skm6GMEa7A==",
00:17:39.000  "nvmf": {
00:17:39.000    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:17:39.000    "discovery": {
00:17:39.000      "discovery_endpoints": [
00:17:39.000        {
00:17:39.000          "trtype": "tcp",
00:17:39.000          "traddr": "127.0.0.1",
00:17:39.000          "trsvcid": "8009"
00:17:39.000        }
00:17:39.000      ]
00:17:39.000    }
00:17:39.000  }'
00:17:39.000    18:38:25 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:17:39.000    18:38:25 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:17:39.000    18:38:25 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n 8 ]]
00:17:39.000    18:38:25 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:17:39.000     18:38:25 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher 8
00:17:39.000     18:38:25 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:17:39.000     18:38:25 sma.sma_crypto -- sma/common.sh@30 -- # echo 8
00:17:39.000    18:38:25 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:17:39.000     18:38:25 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:17:39.000     18:38:25 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:17:39.000      18:38:25 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:17:39.000    18:38:25 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:17:39.000     18:38:25 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:17:39.000    18:38:25 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:17:39.000    "cipher": 8,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY="
00:17:39.000  }'
00:17:39.000    18:38:25 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:17:39.000    18:38:25 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:17:39.259  I0000 00:00:1731865105.695115  508077 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:39.259  I0000 00:00:1731865105.696835  508077 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:39.259  I0000 00:00:1731865105.698199  508091 subchannel.cc:806] subchannel 0x558b54a6c280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x558b549ee880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x558b54badcf0, grpc.internal.client_channel_call_destination=0x7fbe500bc390, grpc.internal.event_engine=0x558b546687d0, grpc.internal.security_connector=0x558b549caa50, grpc.internal.subchannel_pool=0x558b54bdf4f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x558b54be2890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:25.697740494+01:00"}), backing off for 1000 ms
00:17:40.636  Traceback (most recent call last):
00:17:40.636    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:17:40.636      main(sys.argv[1:])
00:17:40.636    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:17:40.636      result = client.call(request['method'], request.get('params', {}))
00:17:40.636               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:40.636    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:17:40.636      response = func(request=json_format.ParseDict(params, input()))
00:17:40.636                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:40.636    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:17:40.636      return _end_unary_response_blocking(state, call, False, None)
00:17:40.636             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:40.636    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:17:40.636      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:17:40.636      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:40.636  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:17:40.636  	status = StatusCode.INVALID_ARGUMENT
00:17:40.636  	details = "Invalid volume crypto configuration: bad cipher"
00:17:40.636  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-11-17T18:38:26.809480155+01:00", grpc_status:3, grpc_message:"Invalid volume crypto configuration: bad cipher"}"
00:17:40.636  >
00:17:40.636   18:38:26 sma.sma_crypto -- common/autotest_common.sh@655 -- # es=1
00:17:40.636   18:38:26 sma.sma_crypto -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:40.636   18:38:26 sma.sma_crypto -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:40.636   18:38:26 sma.sma_crypto -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:40.636    18:38:26 sma.sma_crypto -- sma/crypto.sh@248 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:17:40.636    18:38:26 sma.sma_crypto -- sma/crypto.sh@248 -- # jq -r '.[0].namespaces | length'
00:17:40.636    18:38:26 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:40.636    18:38:26 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:40.636    18:38:26 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:40.636   18:38:26 sma.sma_crypto -- sma/crypto.sh@248 -- # [[ 0 -eq 0 ]]
00:17:40.636    18:38:26 sma.sma_crypto -- sma/crypto.sh@249 -- # rpc_cmd bdev_nvme_get_discovery_info
00:17:40.636    18:38:26 sma.sma_crypto -- sma/crypto.sh@249 -- # jq -r '. | length'
00:17:40.636    18:38:26 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:40.636    18:38:26 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:40.636    18:38:26 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:40.636   18:38:26 sma.sma_crypto -- sma/crypto.sh@249 -- # [[ 0 -eq 0 ]]
00:17:40.636    18:38:26 sma.sma_crypto -- sma/crypto.sh@250 -- # rpc_cmd bdev_get_bdevs
00:17:40.636    18:38:26 sma.sma_crypto -- sma/crypto.sh@250 -- # jq -r length
00:17:40.637    18:38:26 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:40.637    18:38:26 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:40.637    18:38:26 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:40.637   18:38:26 sma.sma_crypto -- sma/crypto.sh@250 -- # [[ 0 -eq 0 ]]
00:17:40.637   18:38:26 sma.sma_crypto -- sma/crypto.sh@252 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:40.637   18:38:26 sma.sma_crypto -- sma/crypto.sh@94 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:40.637  I0000 00:00:1731865107.198925  508335 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:40.637  I0000 00:00:1731865107.200524  508335 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:40.637  I0000 00:00:1731865107.201995  508394 subchannel.cc:806] subchannel 0x56051fd2b280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x56051fcad880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x56051fe6ccf0, grpc.internal.client_channel_call_destination=0x7f178ee83390, grpc.internal.event_engine=0x56051f9277d0, grpc.internal.security_connector=0x56051fc93aa0, grpc.internal.subchannel_pool=0x56051fe9e4f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x56051fea1890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:27.201410763+01:00"}), backing off for 1000 ms
00:17:40.895  {}
00:17:40.895    18:38:27 sma.sma_crypto -- sma/crypto.sh@255 -- # create_device 03ac9f90-b086-453e-b44a-49ba18c11aec AES_CBC 1234567890abcdef1234567890abcdef
00:17:40.895    18:38:27 sma.sma_crypto -- sma/crypto.sh@255 -- # jq -r .handle
00:17:40.895    18:38:27 sma.sma_crypto -- sma/crypto.sh@77 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:40.895     18:38:27 sma.sma_crypto -- sma/crypto.sh@77 -- # gen_volume_params 03ac9f90-b086-453e-b44a-49ba18c11aec AES_CBC 1234567890abcdef1234567890abcdef
00:17:40.895     18:38:27 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=03ac9f90-b086-453e-b44a-49ba18c11aec cipher=AES_CBC key=1234567890abcdef1234567890abcdef key2= config
00:17:40.895     18:38:27 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:17:40.895      18:38:27 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:17:40.895       18:38:27 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 03ac9f90-b086-453e-b44a-49ba18c11aec
00:17:40.895       18:38:27 sma.sma_crypto -- sma/common.sh@20 -- # python
00:17:40.895     18:38:27 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "A6yfkLCGRT60Skm6GMEa7A==",
00:17:40.895  "nvmf": {
00:17:40.895    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:17:40.895    "discovery": {
00:17:40.895      "discovery_endpoints": [
00:17:40.895        {
00:17:40.895          "trtype": "tcp",
00:17:40.896          "traddr": "127.0.0.1",
00:17:40.896          "trsvcid": "8009"
00:17:40.896        }
00:17:40.896      ]
00:17:40.896    }
00:17:40.896  }'
00:17:40.896     18:38:27 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:17:40.896     18:38:27 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:17:40.896     18:38:27 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n AES_CBC ]]
00:17:40.896     18:38:27 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:17:40.896      18:38:27 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher AES_CBC
00:17:40.896      18:38:27 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:17:40.896      18:38:27 sma.sma_crypto -- sma/common.sh@28 -- # echo 0
00:17:40.896     18:38:27 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:17:40.896      18:38:27 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:17:40.896      18:38:27 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/63
00:17:40.896       18:38:27 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:17:40.896     18:38:27 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:17:40.896      18:38:27 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:17:40.896     18:38:27 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:17:40.896    "cipher": 0,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY="
00:17:40.896  }'
00:17:40.896     18:38:27 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:17:40.896     18:38:27 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:17:41.153  I0000 00:00:1731865107.487929  508469 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:41.153  I0000 00:00:1731865107.489464  508469 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:41.153  I0000 00:00:1731865107.490965  508564 subchannel.cc:806] subchannel 0x55874387c280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5587439bdcf0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x558743919da0, grpc.internal.client_channel_call_destination=0x7f47f7ca5390, grpc.internal.event_engine=0x558743592660, grpc.internal.security_connector=0x55874387fd60, grpc.internal.subchannel_pool=0x55874387fd10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5587439f2890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:27.490476904+01:00"}), backing off for 1000 ms
00:17:42.089  [2024-11-17 18:38:28.611584] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:17:42.347   18:38:28 sma.sma_crypto -- sma/crypto.sh@255 -- # device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:42.347   18:38:28 sma.sma_crypto -- sma/crypto.sh@256 -- # verify_crypto_volume nqn.2016-06.io.spdk:cnode0 03ac9f90-b086-453e-b44a-49ba18c11aec
00:17:42.348   18:38:28 sma.sma_crypto -- sma/crypto.sh@132 -- # local nqn=nqn.2016-06.io.spdk:cnode0 uuid=03ac9f90-b086-453e-b44a-49ba18c11aec ns ns_bdev
00:17:42.348    18:38:28 sma.sma_crypto -- sma/crypto.sh@134 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:17:42.348    18:38:28 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:42.348    18:38:28 sma.sma_crypto -- sma/crypto.sh@134 -- # jq -r '.[0].namespaces[0]'
00:17:42.348    18:38:28 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:42.348    18:38:28 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:42.348   18:38:28 sma.sma_crypto -- sma/crypto.sh@134 -- # ns='{
00:17:42.348    "nsid": 1,
00:17:42.348    "bdev_name": "fd06a5d2-a48c-46b0-bd3f-ccdcdbd7b76d",
00:17:42.348    "name": "fd06a5d2-a48c-46b0-bd3f-ccdcdbd7b76d",
00:17:42.348    "nguid": "03AC9F90B086453EB44A49BA18C11AEC",
00:17:42.348    "uuid": "03ac9f90-b086-453e-b44a-49ba18c11aec"
00:17:42.348  }'
00:17:42.348    18:38:28 sma.sma_crypto -- sma/crypto.sh@135 -- # jq -r .name
00:17:42.348   18:38:28 sma.sma_crypto -- sma/crypto.sh@135 -- # ns_bdev=fd06a5d2-a48c-46b0-bd3f-ccdcdbd7b76d
00:17:42.348    18:38:28 sma.sma_crypto -- sma/crypto.sh@138 -- # jq -r '.[0].product_name'
00:17:42.348    18:38:28 sma.sma_crypto -- sma/crypto.sh@138 -- # rpc_cmd bdev_get_bdevs -b fd06a5d2-a48c-46b0-bd3f-ccdcdbd7b76d
00:17:42.348    18:38:28 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:42.348    18:38:28 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:42.348    18:38:28 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:42.348   18:38:28 sma.sma_crypto -- sma/crypto.sh@138 -- # [[ crypto == crypto ]]
00:17:42.348    18:38:28 sma.sma_crypto -- sma/crypto.sh@139 -- # rpc_cmd bdev_get_bdevs
00:17:42.348    18:38:28 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:42.348    18:38:28 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:42.348    18:38:28 sma.sma_crypto -- sma/crypto.sh@139 -- # jq -r '[.[] | select(.product_name == "crypto")] | length'
00:17:42.348    18:38:28 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:42.348   18:38:28 sma.sma_crypto -- sma/crypto.sh@139 -- # [[ 1 -eq 1 ]]
00:17:42.348    18:38:28 sma.sma_crypto -- sma/crypto.sh@141 -- # jq -r .uuid
00:17:42.348   18:38:28 sma.sma_crypto -- sma/crypto.sh@141 -- # [[ 03ac9f90-b086-453e-b44a-49ba18c11aec == \0\3\a\c\9\f\9\0\-\b\0\8\6\-\4\5\3\e\-\b\4\4\a\-\4\9\b\a\1\8\c\1\1\a\e\c ]]
00:17:42.348    18:38:28 sma.sma_crypto -- sma/crypto.sh@142 -- # jq -r .nguid
00:17:42.348    18:38:28 sma.sma_crypto -- sma/crypto.sh@142 -- # uuid2nguid 03ac9f90-b086-453e-b44a-49ba18c11aec
00:17:42.348    18:38:28 sma.sma_crypto -- sma/common.sh@40 -- # local uuid=03AC9F90-B086-453E-B44A-49BA18C11AEC
00:17:42.348    18:38:28 sma.sma_crypto -- sma/common.sh@41 -- # echo 03AC9F90B086453EB44A49BA18C11AEC
00:17:42.348   18:38:28 sma.sma_crypto -- sma/crypto.sh@142 -- # [[ 03AC9F90B086453EB44A49BA18C11AEC == \0\3\A\C\9\F\9\0\B\0\8\6\4\5\3\E\B\4\4\A\4\9\B\A\1\8\C\1\1\A\E\C ]]
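The `uuid2nguid` trace above shows the NGUID is simply the UUID uppercased with its dashes removed. A minimal sketch of that conversion (the body of `sma/common.sh`'s helper is not shown in the trace, so this is an illustrative equivalent, not the verbatim implementation):

```shell
uuid2nguid() {
    local uuid=${1^^}      # uppercase the whole UUID (bash 4+)
    echo "${uuid//-/}"     # strip the dashes, leaving 32 hex digits
}

uuid2nguid 03ac9f90-b086-453e-b44a-49ba18c11aec
# 03AC9F90B086453EB44A49BA18C11AEC (matches the nguid field above)
```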
00:17:42.348   18:38:28 sma.sma_crypto -- sma/crypto.sh@258 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 03ac9f90-b086-453e-b44a-49ba18c11aec
00:17:42.348   18:38:28 sma.sma_crypto -- sma/crypto.sh@120 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:42.348    18:38:28 sma.sma_crypto -- sma/crypto.sh@120 -- # uuid2base64 03ac9f90-b086-453e-b44a-49ba18c11aec
00:17:42.348    18:38:28 sma.sma_crypto -- sma/common.sh@20 -- # python
00:17:42.606  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:42.606  I0000 00:00:1731865109.141696  508816 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:42.606  I0000 00:00:1731865109.143330  508816 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:42.606  I0000 00:00:1731865109.144677  508819 subchannel.cc:806] subchannel 0x56465876f280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5646586f1880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5646588b0cf0, grpc.internal.client_channel_call_destination=0x7fd1524bc390, grpc.internal.event_engine=0x564658580e40, grpc.internal.security_connector=0x5646586d7aa0, grpc.internal.subchannel_pool=0x5646588e24f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5646588e5890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:29.144186235+01:00"}), backing off for 1000 ms
00:17:42.864  {}
00:17:42.864   18:38:29 sma.sma_crypto -- sma/crypto.sh@259 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:42.864   18:38:29 sma.sma_crypto -- sma/crypto.sh@94 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:42.864  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:42.865  I0000 00:00:1731865109.422693  508839 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:42.865  I0000 00:00:1731865109.424517  508839 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:42.865  I0000 00:00:1731865109.425787  508844 subchannel.cc:806] subchannel 0x55da88039280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55da87fbb880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55da8817acf0, grpc.internal.client_channel_call_destination=0x7f8838482390, grpc.internal.event_engine=0x55da87c357d0, grpc.internal.security_connector=0x55da87fa1aa0, grpc.internal.subchannel_pool=0x55da881ac4f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55da881af890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:29.4253601+01:00"}), backing off for 1000 ms
00:17:43.122  {}
00:17:43.122   18:38:29 sma.sma_crypto -- sma/crypto.sh@263 -- # NOT create_device 03ac9f90-b086-453e-b44a-49ba18c11aec 8 1234567890abcdef1234567890abcdef
00:17:43.122   18:38:29 sma.sma_crypto -- common/autotest_common.sh@652 -- # local es=0
00:17:43.122   18:38:29 sma.sma_crypto -- common/autotest_common.sh@654 -- # valid_exec_arg create_device 03ac9f90-b086-453e-b44a-49ba18c11aec 8 1234567890abcdef1234567890abcdef
00:17:43.122   18:38:29 sma.sma_crypto -- common/autotest_common.sh@640 -- # local arg=create_device
00:17:43.122   18:38:29 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:43.122    18:38:29 sma.sma_crypto -- common/autotest_common.sh@644 -- # type -t create_device
00:17:43.123   18:38:29 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:43.123   18:38:29 sma.sma_crypto -- common/autotest_common.sh@655 -- # create_device 03ac9f90-b086-453e-b44a-49ba18c11aec 8 1234567890abcdef1234567890abcdef
00:17:43.123   18:38:29 sma.sma_crypto -- sma/crypto.sh@77 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:43.123    18:38:29 sma.sma_crypto -- sma/crypto.sh@77 -- # gen_volume_params 03ac9f90-b086-453e-b44a-49ba18c11aec 8 1234567890abcdef1234567890abcdef
00:17:43.123    18:38:29 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=03ac9f90-b086-453e-b44a-49ba18c11aec cipher=8 key=1234567890abcdef1234567890abcdef key2= config
00:17:43.123    18:38:29 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:17:43.123     18:38:29 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:17:43.123      18:38:29 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 03ac9f90-b086-453e-b44a-49ba18c11aec
00:17:43.123      18:38:29 sma.sma_crypto -- sma/common.sh@20 -- # python
00:17:43.123    18:38:29 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "A6yfkLCGRT60Skm6GMEa7A==",
00:17:43.123  "nvmf": {
00:17:43.123    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:17:43.123    "discovery": {
00:17:43.123      "discovery_endpoints": [
00:17:43.123        {
00:17:43.123          "trtype": "tcp",
00:17:43.123          "traddr": "127.0.0.1",
00:17:43.123          "trsvcid": "8009"
00:17:43.123        }
00:17:43.123      ]
00:17:43.123    }
00:17:43.123  }'
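The `volume_id` in the config above ("A6yfkLCGRT60Skm6GMEa7A==") is the base64 encoding of the UUID's 16 raw bytes; the trace shows `uuid2base64` doing this via python. A pure-shell sketch of the same conversion, as an illustration only (the real helper's python body is not in the trace):

```shell
uuid2base64() {
    local hex=${1//-/}                          # drop the dashes
    # turn each hex byte into a \xNN escape and emit the raw bytes
    printf "$(sed 's/../\\x&/g' <<< "$hex")" | base64 -w0
}

uuid2base64 03ac9f90-b086-453e-b44a-49ba18c11aec
# A6yfkLCGRT60Skm6GMEa7A== (matches the volume_id above)
```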
00:17:43.123    18:38:29 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:17:43.123    18:38:29 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:17:43.123    18:38:29 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n 8 ]]
00:17:43.123    18:38:29 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:17:43.123     18:38:29 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher 8
00:17:43.123     18:38:29 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:17:43.123     18:38:29 sma.sma_crypto -- sma/common.sh@30 -- # echo 8
00:17:43.123    18:38:29 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:17:43.123     18:38:29 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:17:43.123     18:38:29 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:17:43.123      18:38:29 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:17:43.123    18:38:29 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:17:43.123     18:38:29 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:17:43.123    18:38:29 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:17:43.123    "cipher": 8,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY="
00:17:43.123  }'
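The `format_key` calls traced above pipe the key through `base64 -w 0` with no trailing newline, producing the `key` value in the crypto config. A minimal sketch of that helper:

```shell
format_key() {
    # base64-encode the raw key string; -w0 disables line wrapping
    echo -n "$1" | base64 -w0
}

format_key 1234567890abcdef1234567890abcdef
# MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY= (matches the key above)
```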
00:17:43.123    18:38:29 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:17:43.123    18:38:29 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:17:43.380  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:43.380  I0000 00:00:1731865109.745786  508867 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:43.380  I0000 00:00:1731865109.747730  508867 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:43.381  I0000 00:00:1731865109.749316  509075 subchannel.cc:806] subchannel 0x55b9c826f280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55b9c83b0cf0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55b9c830cda0, grpc.internal.client_channel_call_destination=0x7fe332b0b390, grpc.internal.event_engine=0x55b9c7f85660, grpc.internal.security_connector=0x55b9c8272d60, grpc.internal.subchannel_pool=0x55b9c8272d10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55b9c83e5890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:29.74873163+01:00"}), backing off for 1000 ms
00:17:44.314  Traceback (most recent call last):
00:17:44.314    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:17:44.314      main(sys.argv[1:])
00:17:44.314    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:17:44.314      result = client.call(request['method'], request.get('params', {}))
00:17:44.314               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:44.314    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:17:44.314      response = func(request=json_format.ParseDict(params, input()))
00:17:44.314                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:44.314    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:17:44.314      return _end_unary_response_blocking(state, call, False, None)
00:17:44.314             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:44.314    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:17:44.314      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:17:44.314      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:44.314  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:17:44.314  	status = StatusCode.INVALID_ARGUMENT
00:17:44.314  	details = "Invalid volume crypto configuration: bad cipher"
00:17:44.314  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Invalid volume crypto configuration: bad cipher", grpc_status:3, created_time:"2024-11-17T18:38:30.860009685+01:00"}"
00:17:44.314  >
00:17:44.573   18:38:30 sma.sma_crypto -- common/autotest_common.sh@655 -- # es=1
00:17:44.573   18:38:30 sma.sma_crypto -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:44.573   18:38:30 sma.sma_crypto -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:44.573   18:38:30 sma.sma_crypto -- common/autotest_common.sh@679 -- # (( !es == 0 ))
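The `NOT create_device … 8 …` sequence above is a negative test: the harness expects the RPC to fail (bad cipher), captures `es=1`, and passes because `!es == 0`. A simplified sketch of the inversion pattern (the real `autotest_common.sh` helper also validates the argument and special-cases high exit codes, which this illustration omits):

```shell
# Run a command that is EXPECTED to fail; succeed only if it does fail.
NOT() {
    if "$@"; then
        return 1   # command unexpectedly succeeded
    fi
    return 0       # command failed, as the negative test requires
}
```

Usage mirrors the trace: `NOT create_device <uuid> 8 <key>` passes exactly when `create_device` errors out.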
00:17:44.573    18:38:30 sma.sma_crypto -- sma/crypto.sh@264 -- # rpc_cmd bdev_nvme_get_discovery_info
00:17:44.573    18:38:30 sma.sma_crypto -- sma/crypto.sh@264 -- # jq -r '. | length'
00:17:44.573    18:38:30 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:44.573    18:38:30 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:44.573    18:38:30 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:44.573   18:38:30 sma.sma_crypto -- sma/crypto.sh@264 -- # [[ 0 -eq 0 ]]
00:17:44.573    18:38:30 sma.sma_crypto -- sma/crypto.sh@265 -- # rpc_cmd bdev_get_bdevs
00:17:44.573    18:38:30 sma.sma_crypto -- sma/crypto.sh@265 -- # jq -r length
00:17:44.573    18:38:30 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:44.573    18:38:30 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:44.573    18:38:30 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:44.573   18:38:30 sma.sma_crypto -- sma/crypto.sh@265 -- # [[ 0 -eq 0 ]]
00:17:44.573    18:38:30 sma.sma_crypto -- sma/crypto.sh@266 -- # rpc_cmd nvmf_get_subsystems
00:17:44.573    18:38:30 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:44.573    18:38:30 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:44.573    18:38:30 sma.sma_crypto -- sma/crypto.sh@266 -- # jq -r '[.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode0")] | length'
00:17:44.573    18:38:31 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:44.573   18:38:31 sma.sma_crypto -- sma/crypto.sh@266 -- # [[ 0 -eq 0 ]]
00:17:44.573   18:38:31 sma.sma_crypto -- sma/crypto.sh@269 -- # killprocess 506191
00:17:44.573   18:38:31 sma.sma_crypto -- common/autotest_common.sh@954 -- # '[' -z 506191 ']'
00:17:44.573   18:38:31 sma.sma_crypto -- common/autotest_common.sh@958 -- # kill -0 506191
00:17:44.573    18:38:31 sma.sma_crypto -- common/autotest_common.sh@959 -- # uname
00:17:44.573   18:38:31 sma.sma_crypto -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:44.573    18:38:31 sma.sma_crypto -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 506191
00:17:44.573   18:38:31 sma.sma_crypto -- common/autotest_common.sh@960 -- # process_name=python3
00:17:44.573   18:38:31 sma.sma_crypto -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:17:44.573   18:38:31 sma.sma_crypto -- common/autotest_common.sh@972 -- # echo 'killing process with pid 506191'
00:17:44.573  killing process with pid 506191
00:17:44.573   18:38:31 sma.sma_crypto -- common/autotest_common.sh@973 -- # kill 506191
00:17:44.573   18:38:31 sma.sma_crypto -- common/autotest_common.sh@978 -- # wait 506191
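The `killprocess 506191` trace above follows a fixed sequence: check the pid is set, probe it with `kill -0`, look up its command name, refuse to kill `sudo`, then kill and wait. A condensed sketch of that flow (illustrative; the real helper in `autotest_common.sh` has additional error handling not visible in the trace):

```shell
killprocess() {
    local pid=$1 name
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 0        # already gone
    name=$(ps --no-headers -o comm= "$pid")
    [[ $name != sudo ]] || return 1               # never kill sudo itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null
    return 0
}
```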
00:17:44.573   18:38:31 sma.sma_crypto -- sma/crypto.sh@270 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:17:44.573   18:38:31 sma.sma_crypto -- sma/crypto.sh@278 -- # smapid=509322
00:17:44.573    18:38:31 sma.sma_crypto -- sma/crypto.sh@270 -- # cat
00:17:44.573   18:38:31 sma.sma_crypto -- sma/crypto.sh@280 -- # sma_waitforlisten
00:17:44.573   18:38:31 sma.sma_crypto -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:17:44.573   18:38:31 sma.sma_crypto -- sma/common.sh@8 -- # local sma_port=8080
00:17:44.573   18:38:31 sma.sma_crypto -- sma/common.sh@10 -- # (( i = 0 ))
00:17:44.573   18:38:31 sma.sma_crypto -- sma/common.sh@10 -- # (( i < 5 ))
00:17:44.573   18:38:31 sma.sma_crypto -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:17:44.573   18:38:31 sma.sma_crypto -- sma/common.sh@14 -- # sleep 1s
00:17:44.832  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:44.832  I0000 00:00:1731865111.342934  509322 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:45.767   18:38:32 sma.sma_crypto -- sma/common.sh@10 -- # (( i++ ))
00:17:45.767   18:38:32 sma.sma_crypto -- sma/common.sh@10 -- # (( i < 5 ))
00:17:45.767   18:38:32 sma.sma_crypto -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:17:45.767   18:38:32 sma.sma_crypto -- sma/common.sh@12 -- # return 0
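The `sma_waitforlisten` loop traced above polls `nc -z 127.0.0.1 8080` up to five times, sleeping one second between attempts, and returns 0 once the restarted SMA server accepts connections. A sketch of the same loop, using bash's `/dev/tcp` in place of `nc` and a parameterized attempt count so it is self-contained (both substitutions are illustrative changes, not what the harness runs):

```shell
sma_waitforlisten() {
    local addr=${1:-127.0.0.1} port=${2:-8080} tries=${3:-5} i
    for ((i = 0; i < tries; i++)); do
        # bash equivalent of "nc -z addr port": try to open a TCP socket
        if (exec 3<> "/dev/tcp/$addr/$port") 2>/dev/null; then
            return 0
        fi
        sleep 1
    done
    return 1
}
```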
00:17:45.767    18:38:32 sma.sma_crypto -- sma/crypto.sh@281 -- # create_device
00:17:45.767    18:38:32 sma.sma_crypto -- sma/crypto.sh@281 -- # jq -r .handle
00:17:45.767    18:38:32 sma.sma_crypto -- sma/crypto.sh@77 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:46.026  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:46.026  I0000 00:00:1731865112.360844  509555 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:46.026  I0000 00:00:1731865112.362555  509555 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:46.026  I0000 00:00:1731865112.363929  509556 subchannel.cc:806] subchannel 0x5614a7478280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5614a73fa880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5614a75b9cf0, grpc.internal.client_channel_call_destination=0x7f1f5cab7390, grpc.internal.event_engine=0x5614a7289e40, grpc.internal.security_connector=0x5614a73e0aa0, grpc.internal.subchannel_pool=0x5614a75eb4f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5614a75ee890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:32.36346428+01:00"}), backing off for 1000 ms
00:17:46.026  [2024-11-17 18:38:32.384093] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:17:46.026   18:38:32 sma.sma_crypto -- sma/crypto.sh@281 -- # device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:46.026   18:38:32 sma.sma_crypto -- sma/crypto.sh@283 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 03ac9f90-b086-453e-b44a-49ba18c11aec AES_CBC 1234567890abcdef1234567890abcdef
00:17:46.026   18:38:32 sma.sma_crypto -- common/autotest_common.sh@652 -- # local es=0
00:17:46.026   18:38:32 sma.sma_crypto -- common/autotest_common.sh@654 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 03ac9f90-b086-453e-b44a-49ba18c11aec AES_CBC 1234567890abcdef1234567890abcdef
00:17:46.026   18:38:32 sma.sma_crypto -- common/autotest_common.sh@640 -- # local arg=attach_volume
00:17:46.026   18:38:32 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:46.026    18:38:32 sma.sma_crypto -- common/autotest_common.sh@644 -- # type -t attach_volume
00:17:46.026   18:38:32 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:46.026   18:38:32 sma.sma_crypto -- common/autotest_common.sh@655 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 03ac9f90-b086-453e-b44a-49ba18c11aec AES_CBC 1234567890abcdef1234567890abcdef
00:17:46.026   18:38:32 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:46.026   18:38:32 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:17:46.026   18:38:32 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:46.026    18:38:32 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 03ac9f90-b086-453e-b44a-49ba18c11aec AES_CBC 1234567890abcdef1234567890abcdef
00:17:46.026    18:38:32 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=03ac9f90-b086-453e-b44a-49ba18c11aec cipher=AES_CBC key=1234567890abcdef1234567890abcdef key2= config
00:17:46.026    18:38:32 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:17:46.026     18:38:32 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:17:46.026      18:38:32 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 03ac9f90-b086-453e-b44a-49ba18c11aec
00:17:46.026      18:38:32 sma.sma_crypto -- sma/common.sh@20 -- # python
00:17:46.026    18:38:32 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "A6yfkLCGRT60Skm6GMEa7A==",
00:17:46.026  "nvmf": {
00:17:46.026    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:17:46.026    "discovery": {
00:17:46.026      "discovery_endpoints": [
00:17:46.026        {
00:17:46.026          "trtype": "tcp",
00:17:46.026          "traddr": "127.0.0.1",
00:17:46.026          "trsvcid": "8009"
00:17:46.026        }
00:17:46.026      ]
00:17:46.026    }
00:17:46.026  }'
00:17:46.026    18:38:32 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:17:46.026    18:38:32 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:17:46.026    18:38:32 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n AES_CBC ]]
00:17:46.026    18:38:32 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:17:46.026     18:38:32 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher AES_CBC
00:17:46.026     18:38:32 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:17:46.026     18:38:32 sma.sma_crypto -- sma/common.sh@28 -- # echo 0
00:17:46.026    18:38:32 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:17:46.026     18:38:32 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:17:46.026     18:38:32 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:17:46.026      18:38:32 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:17:46.026    18:38:32 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:17:46.026     18:38:32 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:17:46.026    18:38:32 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:17:46.026    "cipher": 0,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY="
00:17:46.026  }'
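Comparing the two `get_cipher` traces: `AES_CBC` maps to `0` (common.sh@28), while the invalid value `8` in the earlier negative test was echoed back unchanged (common.sh@30) and only rejected server-side ("bad cipher"). A sketch consistent with those two observed cases; the mappings for other cipher names are not shown in this trace and are omitted here:

```shell
get_cipher() {
    case "$1" in
        AES_CBC) echo 0 ;;   # protobuf enum value observed in the trace
        *) echo "$1" ;;      # unknown values pass through (observed for "8")
    esac
}
```

This is why the bad-cipher test works: the client sends `"cipher": 8` verbatim and relies on the SMA server to reject it.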
00:17:46.026    18:38:32 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:17:46.026    18:38:32 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:17:46.285  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:46.285  I0000 00:00:1731865112.714932  509578 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:46.285  I0000 00:00:1731865112.716426  509578 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:46.285  I0000 00:00:1731865112.717816  509593 subchannel.cc:806] subchannel 0x55f3a9660280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55f3a95e2880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55f3a97a1cf0, grpc.internal.client_channel_call_destination=0x7f831ac85390, grpc.internal.event_engine=0x55f3a925c7d0, grpc.internal.security_connector=0x55f3a95bea50, grpc.internal.subchannel_pool=0x55f3a97d34f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55f3a97d6890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:32.717390305+01:00"}), backing off for 1000 ms
00:17:47.663  Traceback (most recent call last):
00:17:47.663    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:17:47.663      main(sys.argv[1:])
00:17:47.663    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:17:47.663      result = client.call(request['method'], request.get('params', {}))
00:17:47.663               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:47.663    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:17:47.663      response = func(request=json_format.ParseDict(params, input()))
00:17:47.663                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:47.663    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:17:47.663      return _end_unary_response_blocking(state, call, False, None)
00:17:47.663             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:47.663    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:17:47.663      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:17:47.663      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:47.663  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:17:47.663  	status = StatusCode.INVALID_ARGUMENT
00:17:47.663  	details = "Crypto is disabled"
00:17:47.663  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-11-17T18:38:33.823567034+01:00", grpc_status:3, grpc_message:"Crypto is disabled"}"
00:17:47.663  >
00:17:47.663   18:38:33 sma.sma_crypto -- common/autotest_common.sh@655 -- # es=1
00:17:47.663   18:38:33 sma.sma_crypto -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:47.663   18:38:33 sma.sma_crypto -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:47.663   18:38:33 sma.sma_crypto -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:47.663    18:38:33 sma.sma_crypto -- sma/crypto.sh@284 -- # rpc_cmd bdev_nvme_get_discovery_info
00:17:47.663    18:38:33 sma.sma_crypto -- sma/crypto.sh@284 -- # jq -r '. | length'
00:17:47.663    18:38:33 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:47.663    18:38:33 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:47.663    18:38:33 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:47.663   18:38:33 sma.sma_crypto -- sma/crypto.sh@284 -- # [[ 0 -eq 0 ]]
00:17:47.663    18:38:33 sma.sma_crypto -- sma/crypto.sh@285 -- # rpc_cmd bdev_get_bdevs
00:17:47.663    18:38:33 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:47.663    18:38:33 sma.sma_crypto -- sma/crypto.sh@285 -- # jq -r length
00:17:47.663    18:38:33 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:47.663    18:38:33 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:47.663   18:38:33 sma.sma_crypto -- sma/crypto.sh@285 -- # [[ 0 -eq 0 ]]
00:17:47.663   18:38:33 sma.sma_crypto -- sma/crypto.sh@287 -- # cleanup
00:17:47.663   18:38:33 sma.sma_crypto -- sma/crypto.sh@22 -- # killprocess 509322
00:17:47.663   18:38:33 sma.sma_crypto -- common/autotest_common.sh@954 -- # '[' -z 509322 ']'
00:17:47.663   18:38:33 sma.sma_crypto -- common/autotest_common.sh@958 -- # kill -0 509322
00:17:47.663    18:38:33 sma.sma_crypto -- common/autotest_common.sh@959 -- # uname
00:17:47.663   18:38:33 sma.sma_crypto -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:47.663    18:38:33 sma.sma_crypto -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 509322
00:17:47.663   18:38:34 sma.sma_crypto -- common/autotest_common.sh@960 -- # process_name=python3
00:17:47.663   18:38:34 sma.sma_crypto -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:17:47.663   18:38:34 sma.sma_crypto -- common/autotest_common.sh@972 -- # echo 'killing process with pid 509322'
00:17:47.663  killing process with pid 509322
00:17:47.663   18:38:34 sma.sma_crypto -- common/autotest_common.sh@973 -- # kill 509322
00:17:47.663   18:38:34 sma.sma_crypto -- common/autotest_common.sh@978 -- # wait 509322
00:17:47.663   18:38:34 sma.sma_crypto -- sma/crypto.sh@23 -- # killprocess 505775
00:17:47.663   18:38:34 sma.sma_crypto -- common/autotest_common.sh@954 -- # '[' -z 505775 ']'
00:17:47.663   18:38:34 sma.sma_crypto -- common/autotest_common.sh@958 -- # kill -0 505775
00:17:47.663    18:38:34 sma.sma_crypto -- common/autotest_common.sh@959 -- # uname
00:17:47.663   18:38:34 sma.sma_crypto -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:47.663    18:38:34 sma.sma_crypto -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 505775
00:17:47.663   18:38:34 sma.sma_crypto -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:47.663   18:38:34 sma.sma_crypto -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:47.663   18:38:34 sma.sma_crypto -- common/autotest_common.sh@972 -- # echo 'killing process with pid 505775'
00:17:47.663  killing process with pid 505775
00:17:47.663   18:38:34 sma.sma_crypto -- common/autotest_common.sh@973 -- # kill 505775
00:17:47.663   18:38:34 sma.sma_crypto -- common/autotest_common.sh@978 -- # wait 505775
00:17:48.231   18:38:34 sma.sma_crypto -- sma/crypto.sh@24 -- # killprocess 506190
00:17:48.231   18:38:34 sma.sma_crypto -- common/autotest_common.sh@954 -- # '[' -z 506190 ']'
00:17:48.231   18:38:34 sma.sma_crypto -- common/autotest_common.sh@958 -- # kill -0 506190
00:17:48.231    18:38:34 sma.sma_crypto -- common/autotest_common.sh@959 -- # uname
00:17:48.231   18:38:34 sma.sma_crypto -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:48.231    18:38:34 sma.sma_crypto -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 506190
00:17:48.231   18:38:34 sma.sma_crypto -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:17:48.231   18:38:34 sma.sma_crypto -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:17:48.231   18:38:34 sma.sma_crypto -- common/autotest_common.sh@972 -- # echo 'killing process with pid 506190'
00:17:48.231  killing process with pid 506190
00:17:48.231   18:38:34 sma.sma_crypto -- common/autotest_common.sh@973 -- # kill 506190
00:17:48.231   18:38:34 sma.sma_crypto -- common/autotest_common.sh@978 -- # wait 506190
00:17:48.490   18:38:34 sma.sma_crypto -- sma/crypto.sh@288 -- # trap - SIGINT SIGTERM EXIT
00:17:48.490  
00:17:48.490  real	0m20.152s
00:17:48.490  user	0m44.857s
00:17:48.490  sys	0m2.713s
00:17:48.490   18:38:34 sma.sma_crypto -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:48.490   18:38:34 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:48.491  ************************************
00:17:48.491  END TEST sma_crypto
00:17:48.491  ************************************
00:17:48.491   18:38:34 sma -- sma/sma.sh@17 -- # run_test sma_qos /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/qos.sh
00:17:48.491   18:38:34 sma -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:48.491   18:38:34 sma -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:48.491   18:38:34 sma -- common/autotest_common.sh@10 -- # set +x
00:17:48.491  ************************************
00:17:48.491  START TEST sma_qos
00:17:48.491  ************************************
00:17:48.491   18:38:34 sma.sma_qos -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/qos.sh
00:17:48.491  * Looking for test storage...
00:17:48.491  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:17:48.491    18:38:35 sma.sma_qos -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:17:48.491     18:38:35 sma.sma_qos -- common/autotest_common.sh@1693 -- # lcov --version
00:17:48.491     18:38:35 sma.sma_qos -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:17:48.750    18:38:35 sma.sma_qos -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:17:48.750    18:38:35 sma.sma_qos -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:48.750    18:38:35 sma.sma_qos -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:48.750    18:38:35 sma.sma_qos -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:48.750    18:38:35 sma.sma_qos -- scripts/common.sh@336 -- # IFS=.-:
00:17:48.750    18:38:35 sma.sma_qos -- scripts/common.sh@336 -- # read -ra ver1
00:17:48.750    18:38:35 sma.sma_qos -- scripts/common.sh@337 -- # IFS=.-:
00:17:48.750    18:38:35 sma.sma_qos -- scripts/common.sh@337 -- # read -ra ver2
00:17:48.750    18:38:35 sma.sma_qos -- scripts/common.sh@338 -- # local 'op=<'
00:17:48.750    18:38:35 sma.sma_qos -- scripts/common.sh@340 -- # ver1_l=2
00:17:48.750    18:38:35 sma.sma_qos -- scripts/common.sh@341 -- # ver2_l=1
00:17:48.750    18:38:35 sma.sma_qos -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:48.750    18:38:35 sma.sma_qos -- scripts/common.sh@344 -- # case "$op" in
00:17:48.750    18:38:35 sma.sma_qos -- scripts/common.sh@345 -- # : 1
00:17:48.750    18:38:35 sma.sma_qos -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:48.750    18:38:35 sma.sma_qos -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:48.750     18:38:35 sma.sma_qos -- scripts/common.sh@365 -- # decimal 1
00:17:48.750     18:38:35 sma.sma_qos -- scripts/common.sh@353 -- # local d=1
00:17:48.750     18:38:35 sma.sma_qos -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:48.750     18:38:35 sma.sma_qos -- scripts/common.sh@355 -- # echo 1
00:17:48.750    18:38:35 sma.sma_qos -- scripts/common.sh@365 -- # ver1[v]=1
00:17:48.750     18:38:35 sma.sma_qos -- scripts/common.sh@366 -- # decimal 2
00:17:48.750     18:38:35 sma.sma_qos -- scripts/common.sh@353 -- # local d=2
00:17:48.750     18:38:35 sma.sma_qos -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:48.750     18:38:35 sma.sma_qos -- scripts/common.sh@355 -- # echo 2
00:17:48.751    18:38:35 sma.sma_qos -- scripts/common.sh@366 -- # ver2[v]=2
00:17:48.751    18:38:35 sma.sma_qos -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:48.751    18:38:35 sma.sma_qos -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:17:48.751    18:38:35 sma.sma_qos -- scripts/common.sh@368 -- # return 0
00:17:48.751    18:38:35 sma.sma_qos -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:48.751    18:38:35 sma.sma_qos -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:17:48.751  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:48.751  		--rc genhtml_branch_coverage=1
00:17:48.751  		--rc genhtml_function_coverage=1
00:17:48.751  		--rc genhtml_legend=1
00:17:48.751  		--rc geninfo_all_blocks=1
00:17:48.751  		--rc geninfo_unexecuted_blocks=1
00:17:48.751  		
00:17:48.751  		'
00:17:48.751    18:38:35 sma.sma_qos -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:17:48.751  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:48.751  		--rc genhtml_branch_coverage=1
00:17:48.751  		--rc genhtml_function_coverage=1
00:17:48.751  		--rc genhtml_legend=1
00:17:48.751  		--rc geninfo_all_blocks=1
00:17:48.751  		--rc geninfo_unexecuted_blocks=1
00:17:48.751  		
00:17:48.751  		'
00:17:48.751    18:38:35 sma.sma_qos -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:17:48.751  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:48.751  		--rc genhtml_branch_coverage=1
00:17:48.751  		--rc genhtml_function_coverage=1
00:17:48.751  		--rc genhtml_legend=1
00:17:48.751  		--rc geninfo_all_blocks=1
00:17:48.751  		--rc geninfo_unexecuted_blocks=1
00:17:48.751  		
00:17:48.751  		'
00:17:48.751    18:38:35 sma.sma_qos -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:17:48.751  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:48.751  		--rc genhtml_branch_coverage=1
00:17:48.751  		--rc genhtml_function_coverage=1
00:17:48.751  		--rc genhtml_legend=1
00:17:48.751  		--rc geninfo_all_blocks=1
00:17:48.751  		--rc geninfo_unexecuted_blocks=1
00:17:48.751  		
00:17:48.751  		'
00:17:48.751   18:38:35 sma.sma_qos -- sma/qos.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:17:48.751   18:38:35 sma.sma_qos -- sma/qos.sh@13 -- # smac=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:48.751   18:38:35 sma.sma_qos -- sma/qos.sh@15 -- # device_nvmf_tcp=3
00:17:48.751    18:38:35 sma.sma_qos -- sma/qos.sh@16 -- # printf %u -1
00:17:48.751   18:38:35 sma.sma_qos -- sma/qos.sh@16 -- # limit_reserved=18446744073709551615
00:17:48.751   18:38:35 sma.sma_qos -- sma/qos.sh@42 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:17:48.751   18:38:35 sma.sma_qos -- sma/qos.sh@45 -- # tgtpid=510109
00:17:48.751   18:38:35 sma.sma_qos -- sma/qos.sh@44 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:17:48.751   18:38:35 sma.sma_qos -- sma/qos.sh@55 -- # smapid=510110
00:17:48.751   18:38:35 sma.sma_qos -- sma/qos.sh@57 -- # sma_waitforlisten
00:17:48.751   18:38:35 sma.sma_qos -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:17:48.751   18:38:35 sma.sma_qos -- sma/qos.sh@47 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:17:48.751   18:38:35 sma.sma_qos -- sma/common.sh@8 -- # local sma_port=8080
00:17:48.751   18:38:35 sma.sma_qos -- sma/common.sh@10 -- # (( i = 0 ))
00:17:48.751    18:38:35 sma.sma_qos -- sma/qos.sh@47 -- # cat
00:17:48.751   18:38:35 sma.sma_qos -- sma/common.sh@10 -- # (( i < 5 ))
00:17:48.751   18:38:35 sma.sma_qos -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:17:48.751   18:38:35 sma.sma_qos -- sma/common.sh@14 -- # sleep 1s
00:17:48.751  [2024-11-17 18:38:35.207746] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:17:48.751  [2024-11-17 18:38:35.207926] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid510109 ]
00:17:48.751  EAL: No free 2048 kB hugepages reported on node 1
00:17:48.751  [2024-11-17 18:38:35.316532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:49.010  [2024-11-17 18:38:35.354241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:49.577   18:38:36 sma.sma_qos -- sma/common.sh@10 -- # (( i++ ))
00:17:49.577   18:38:36 sma.sma_qos -- sma/common.sh@10 -- # (( i < 5 ))
00:17:49.577   18:38:36 sma.sma_qos -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:17:49.836   18:38:36 sma.sma_qos -- sma/common.sh@14 -- # sleep 1s
00:17:49.836  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:49.836  I0000 00:00:1731865116.344202  510110 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:49.836  [2024-11-17 18:38:36.355931] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:50.773   18:38:37 sma.sma_qos -- sma/common.sh@10 -- # (( i++ ))
00:17:50.773   18:38:37 sma.sma_qos -- sma/common.sh@10 -- # (( i < 5 ))
00:17:50.773   18:38:37 sma.sma_qos -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:17:50.773   18:38:37 sma.sma_qos -- sma/common.sh@12 -- # return 0
00:17:50.773   18:38:37 sma.sma_qos -- sma/qos.sh@60 -- # rpc_cmd bdev_null_create null0 100 4096
00:17:50.773   18:38:37 sma.sma_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:50.773   18:38:37 sma.sma_qos -- common/autotest_common.sh@10 -- # set +x
00:17:50.773  null0
00:17:50.773   18:38:37 sma.sma_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:50.773    18:38:37 sma.sma_qos -- sma/qos.sh@61 -- # jq -r '.[].uuid'
00:17:50.773    18:38:37 sma.sma_qos -- sma/qos.sh@61 -- # rpc_cmd bdev_get_bdevs -b null0
00:17:50.773    18:38:37 sma.sma_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:50.773    18:38:37 sma.sma_qos -- common/autotest_common.sh@10 -- # set +x
00:17:50.773    18:38:37 sma.sma_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:50.773   18:38:37 sma.sma_qos -- sma/qos.sh@61 -- # uuid=b4b610a1-fb40-4f30-856a-b76b81b2197f
00:17:50.773    18:38:37 sma.sma_qos -- sma/qos.sh@62 -- # create_device b4b610a1-fb40-4f30-856a-b76b81b2197f
00:17:50.773    18:38:37 sma.sma_qos -- sma/qos.sh@62 -- # jq -r .handle
00:17:50.773    18:38:37 sma.sma_qos -- sma/qos.sh@24 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:50.773     18:38:37 sma.sma_qos -- sma/qos.sh@24 -- # uuid2base64 b4b610a1-fb40-4f30-856a-b76b81b2197f
00:17:50.773     18:38:37 sma.sma_qos -- sma/common.sh@20 -- # python
00:17:51.032  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:51.032  I0000 00:00:1731865117.537866  510557 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:51.032  I0000 00:00:1731865117.539531  510557 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:51.032  I0000 00:00:1731865117.540898  510560 subchannel.cc:806] subchannel 0x560677043280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x560676fc5880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x560677184cf0, grpc.internal.client_channel_call_destination=0x7f7ca9da5390, grpc.internal.event_engine=0x560676c3f7d0, grpc.internal.security_connector=0x560676fa1a50, grpc.internal.subchannel_pool=0x5606771b64f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5606771b9890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:37.540511667+01:00"}), backing off for 1000 ms
00:17:51.032  [2024-11-17 18:38:37.567567] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:17:51.032   18:38:37 sma.sma_qos -- sma/qos.sh@62 -- # device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:51.032    18:38:37 sma.sma_qos -- sma/qos.sh@65 -- # get_qos_caps 3
00:17:51.032   18:38:37 sma.sma_qos -- sma/qos.sh@65 -- # diff /dev/fd/62 /dev/fd/61
00:17:51.032    18:38:37 sma.sma_qos -- sma/qos.sh@65 -- # jq --sort-keys
00:17:51.032    18:38:37 sma.sma_qos -- sma/common.sh@45 -- # local rootdir
00:17:51.032    18:38:37 sma.sma_qos -- sma/qos.sh@65 -- # jq --sort-keys
00:17:51.032     18:38:37 sma.sma_qos -- sma/common.sh@47 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:17:51.032    18:38:37 sma.sma_qos -- sma/common.sh@47 -- # rootdir=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../..
00:17:51.032    18:38:37 sma.sma_qos -- sma/common.sh@49 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../../scripts/sma-client.py
00:17:51.291  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:51.291  I0000 00:00:1731865117.797715  510589 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:51.291  I0000 00:00:1731865117.799280  510589 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:51.291  I0000 00:00:1731865117.800480  510590 subchannel.cc:806] subchannel 0x562565f9aeb0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x562565fa5de0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x562565d820b0, grpc.internal.client_channel_call_destination=0x7f76bf941390, grpc.internal.event_engine=0x562565f807a0, grpc.internal.security_connector=0x562565f7c0c0, grpc.internal.subchannel_pool=0x562565f7bf20, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x562565f75d10, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:37.800053112+01:00"}), backing off for 1000 ms
00:17:51.291   18:38:37 sma.sma_qos -- sma/qos.sh@79 -- # NOT get_qos_caps 1234
00:17:51.291   18:38:37 sma.sma_qos -- common/autotest_common.sh@652 -- # local es=0
00:17:51.291   18:38:37 sma.sma_qos -- common/autotest_common.sh@654 -- # valid_exec_arg get_qos_caps 1234
00:17:51.291   18:38:37 sma.sma_qos -- common/autotest_common.sh@640 -- # local arg=get_qos_caps
00:17:51.291   18:38:37 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:51.291    18:38:37 sma.sma_qos -- common/autotest_common.sh@644 -- # type -t get_qos_caps
00:17:51.291   18:38:37 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:51.291   18:38:37 sma.sma_qos -- common/autotest_common.sh@655 -- # get_qos_caps 1234
00:17:51.291   18:38:37 sma.sma_qos -- sma/common.sh@45 -- # local rootdir
00:17:51.291    18:38:37 sma.sma_qos -- sma/common.sh@47 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:17:51.291   18:38:37 sma.sma_qos -- sma/common.sh@47 -- # rootdir=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../..
00:17:51.291   18:38:37 sma.sma_qos -- sma/common.sh@49 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../../scripts/sma-client.py
00:17:51.549  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:51.549  I0000 00:00:1731865118.023734  510619 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:51.549  I0000 00:00:1731865118.025578  510619 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:51.549  I0000 00:00:1731865118.027155  510815 subchannel.cc:806] subchannel 0x555640cdeeb0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x555640ce9de0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x555640ac60b0, grpc.internal.client_channel_call_destination=0x7f345041e390, grpc.internal.event_engine=0x555640cc47a0, grpc.internal.security_connector=0x555640cc00c0, grpc.internal.subchannel_pool=0x555640cbff20, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x555640cb9d10, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:38.02645866+01:00"}), backing off for 1000 ms
00:17:51.549  Traceback (most recent call last):
00:17:51.549    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../../scripts/sma-client.py", line 74, in <module>
00:17:51.549      main(sys.argv[1:])
00:17:51.550    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../../scripts/sma-client.py", line 69, in main
00:17:51.550      result = client.call(request['method'], request.get('params', {}))
00:17:51.550               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:51.550    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../../scripts/sma-client.py", line 43, in call
00:17:51.550      response = func(request=json_format.ParseDict(params, input()))
00:17:51.550                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:51.550    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:17:51.550      return _end_unary_response_blocking(state, call, False, None)
00:17:51.550             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:51.550    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:17:51.550      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:17:51.550      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:51.550  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:17:51.550  	status = StatusCode.INVALID_ARGUMENT
00:17:51.550  	details = "Invalid device type"
00:17:51.550  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Invalid device type", grpc_status:3, created_time:"2024-11-17T18:38:38.028513121+01:00"}"
00:17:51.550  >
00:17:51.550   18:38:38 sma.sma_qos -- common/autotest_common.sh@655 -- # es=1
00:17:51.550   18:38:38 sma.sma_qos -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:51.550   18:38:38 sma.sma_qos -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:51.550   18:38:38 sma.sma_qos -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:51.550   18:38:38 sma.sma_qos -- sma/qos.sh@82 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:51.550    18:38:38 sma.sma_qos -- sma/qos.sh@82 -- # uuid2base64 b4b610a1-fb40-4f30-856a-b76b81b2197f
00:17:51.550    18:38:38 sma.sma_qos -- sma/common.sh@20 -- # python
00:17:51.809  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:51.809  I0000 00:00:1731865118.313001  510836 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:51.809  I0000 00:00:1731865118.314711  510836 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:51.809  I0000 00:00:1731865118.316101  510842 subchannel.cc:806] subchannel 0x55ae4b484280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55ae4b406880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55ae4b5c5cf0, grpc.internal.client_channel_call_destination=0x7fa5b310f390, grpc.internal.event_engine=0x55ae4b0807d0, grpc.internal.security_connector=0x55ae4b3e2a50, grpc.internal.subchannel_pool=0x55ae4b5f74f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55ae4b5fa890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:38.315612513+01:00"}), backing off for 1000 ms
00:17:51.809  {}
00:17:51.809   18:38:38 sma.sma_qos -- sma/qos.sh@94 -- # diff /dev/fd/62 /dev/fd/61
00:17:51.809    18:38:38 sma.sma_qos -- sma/qos.sh@94 -- # jq --sort-keys
00:17:51.809    18:38:38 sma.sma_qos -- sma/qos.sh@94 -- # jq --sort-keys '.[].assigned_rate_limits'
00:17:51.809    18:38:38 sma.sma_qos -- sma/qos.sh@94 -- # rpc_cmd bdev_get_bdevs -b null0
00:17:51.809    18:38:38 sma.sma_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:51.809    18:38:38 sma.sma_qos -- common/autotest_common.sh@10 -- # set +x
00:17:51.809    18:38:38 sma.sma_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:52.067   18:38:38 sma.sma_qos -- sma/qos.sh@106 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:52.067    18:38:38 sma.sma_qos -- sma/qos.sh@106 -- # uuid2base64 b4b610a1-fb40-4f30-856a-b76b81b2197f
00:17:52.067    18:38:38 sma.sma_qos -- sma/common.sh@20 -- # python
00:17:52.067  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:52.067  I0000 00:00:1731865118.636371  510868 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:52.067  I0000 00:00:1731865118.638173  510868 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:52.067  I0000 00:00:1731865118.639558  510874 subchannel.cc:806] subchannel 0x55ece09b7280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55ece0939880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55ece0af8cf0, grpc.internal.client_channel_call_destination=0x7f2cbcc12390, grpc.internal.event_engine=0x55ece05b37d0, grpc.internal.security_connector=0x55ece0915a50, grpc.internal.subchannel_pool=0x55ece0b2a4f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55ece0b2d890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:38.639076282+01:00"}), backing off for 1000 ms
00:17:52.327  {}
00:17:52.327    18:38:38 sma.sma_qos -- sma/qos.sh@119 -- # jq --sort-keys
00:17:52.327    18:38:38 sma.sma_qos -- sma/qos.sh@119 -- # rpc_cmd bdev_get_bdevs -b null0
00:17:52.327   18:38:38 sma.sma_qos -- sma/qos.sh@119 -- # diff /dev/fd/62 /dev/fd/61
00:17:52.327    18:38:38 sma.sma_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:52.327    18:38:38 sma.sma_qos -- sma/qos.sh@119 -- # jq --sort-keys '.[].assigned_rate_limits'
00:17:52.327    18:38:38 sma.sma_qos -- common/autotest_common.sh@10 -- # set +x
00:17:52.327    18:38:38 sma.sma_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:52.327   18:38:38 sma.sma_qos -- sma/qos.sh@131 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:52.327    18:38:38 sma.sma_qos -- sma/qos.sh@131 -- # uuid2base64 b4b610a1-fb40-4f30-856a-b76b81b2197f
00:17:52.327    18:38:38 sma.sma_qos -- sma/common.sh@20 -- # python
00:17:52.585  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:52.585  I0000 00:00:1731865118.950080  510900 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:52.585  I0000 00:00:1731865118.951534  510900 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:52.585  I0000 00:00:1731865118.952693  510903 subchannel.cc:806] subchannel 0x5636cb0c9280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5636cb04b880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5636cb20acf0, grpc.internal.client_channel_call_destination=0x7f4236cc8390, grpc.internal.event_engine=0x5636cacc57d0, grpc.internal.security_connector=0x5636cb027a50, grpc.internal.subchannel_pool=0x5636cb23c4f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5636cb23f890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:38.952324041+01:00"}), backing off for 1000 ms
00:17:52.585  {}
00:17:52.585    18:38:38 sma.sma_qos -- sma/qos.sh@145 -- # jq --sort-keys
00:17:52.585    18:38:38 sma.sma_qos -- sma/qos.sh@145 -- # rpc_cmd bdev_get_bdevs -b null0
00:17:52.585   18:38:38 sma.sma_qos -- sma/qos.sh@145 -- # diff /dev/fd/62 /dev/fd/61
00:17:52.585    18:38:38 sma.sma_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:52.585    18:38:38 sma.sma_qos -- common/autotest_common.sh@10 -- # set +x
00:17:52.585    18:38:38 sma.sma_qos -- sma/qos.sh@145 -- # jq --sort-keys '.[].assigned_rate_limits'
00:17:52.585    18:38:39 sma.sma_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:52.585   18:38:39 sma.sma_qos -- sma/qos.sh@157 -- # unsupported_max_limits=(rd_iops wr_iops)
00:17:52.585   18:38:39 sma.sma_qos -- sma/qos.sh@159 -- # for limit in "${unsupported_max_limits[@]}"
00:17:52.585   18:38:39 sma.sma_qos -- sma/qos.sh@160 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:52.585    18:38:39 sma.sma_qos -- sma/qos.sh@160 -- # uuid2base64 b4b610a1-fb40-4f30-856a-b76b81b2197f
00:17:52.585    18:38:39 sma.sma_qos -- sma/common.sh@20 -- # python
00:17:52.586   18:38:39 sma.sma_qos -- common/autotest_common.sh@652 -- # local es=0
00:17:52.586   18:38:39 sma.sma_qos -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:52.586   18:38:39 sma.sma_qos -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:52.586   18:38:39 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:52.586    18:38:39 sma.sma_qos -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:52.586   18:38:39 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:52.586    18:38:39 sma.sma_qos -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:52.586   18:38:39 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:52.586   18:38:39 sma.sma_qos -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:52.586   18:38:39 sma.sma_qos -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:17:52.586   18:38:39 sma.sma_qos -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:52.843  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:52.843  I0000 00:00:1731865119.283273  510988 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:52.843  I0000 00:00:1731865119.285027  510988 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:52.843  I0000 00:00:1731865119.286447  511134 subchannel.cc:806] subchannel 0x56478510e280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x564785090880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x56478524fcf0, grpc.internal.client_channel_call_destination=0x7f185ffa1390, grpc.internal.event_engine=0x564784d0a7d0, grpc.internal.security_connector=0x564785137120, grpc.internal.subchannel_pool=0x5647852814f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x564785284890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:39.285947825+01:00"}), backing off for 999 ms
00:17:52.843  Traceback (most recent call last):
00:17:52.843    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:17:52.843      main(sys.argv[1:])
00:17:52.843    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:17:52.843      result = client.call(request['method'], request.get('params', {}))
00:17:52.843               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:52.843    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:17:52.843      response = func(request=json_format.ParseDict(params, input()))
00:17:52.843                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:52.843    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:17:52.843      return _end_unary_response_blocking(state, call, False, None)
00:17:52.843             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:52.843    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:17:52.843      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:17:52.843      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:52.843  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:17:52.843  	status = StatusCode.INVALID_ARGUMENT
00:17:52.843  	details = "Unsupported QoS limit: maximum.rd_iops"
00:17:52.843  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Unsupported QoS limit: maximum.rd_iops", grpc_status:3, created_time:"2024-11-17T18:38:39.304752431+01:00"}"
00:17:52.843  >
00:17:52.843   18:38:39 sma.sma_qos -- common/autotest_common.sh@655 -- # es=1
00:17:52.843   18:38:39 sma.sma_qos -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:52.843   18:38:39 sma.sma_qos -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:52.843   18:38:39 sma.sma_qos -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:52.843   18:38:39 sma.sma_qos -- sma/qos.sh@159 -- # for limit in "${unsupported_max_limits[@]}"
00:17:52.843   18:38:39 sma.sma_qos -- sma/qos.sh@160 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:52.843    18:38:39 sma.sma_qos -- sma/qos.sh@160 -- # uuid2base64 b4b610a1-fb40-4f30-856a-b76b81b2197f
00:17:52.843    18:38:39 sma.sma_qos -- sma/common.sh@20 -- # python
00:17:52.843   18:38:39 sma.sma_qos -- common/autotest_common.sh@652 -- # local es=0
00:17:52.843   18:38:39 sma.sma_qos -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:52.843   18:38:39 sma.sma_qos -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:52.843   18:38:39 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:52.843    18:38:39 sma.sma_qos -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:52.843   18:38:39 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:52.843    18:38:39 sma.sma_qos -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:52.843   18:38:39 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:52.843   18:38:39 sma.sma_qos -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:52.843   18:38:39 sma.sma_qos -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:17:52.843   18:38:39 sma.sma_qos -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:53.101  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:53.101  I0000 00:00:1731865119.548248  511159 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:53.101  I0000 00:00:1731865119.549933  511159 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:53.101  I0000 00:00:1731865119.551176  511160 subchannel.cc:806] subchannel 0x55a82237e280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55a822300880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55a8224bfcf0, grpc.internal.client_channel_call_destination=0x7f9d1de0a390, grpc.internal.event_engine=0x55a821f7a7d0, grpc.internal.security_connector=0x55a8223a7120, grpc.internal.subchannel_pool=0x55a8224f14f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55a8224f4890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:39.550741279+01:00"}), backing off for 1000 ms
00:17:53.101  Traceback (most recent call last):
00:17:53.102    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:17:53.102      main(sys.argv[1:])
00:17:53.102    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:17:53.102      result = client.call(request['method'], request.get('params', {}))
00:17:53.102               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:53.102    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:17:53.102      response = func(request=json_format.ParseDict(params, input()))
00:17:53.102                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:53.102    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:17:53.102      return _end_unary_response_blocking(state, call, False, None)
00:17:53.102             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:53.102    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:17:53.102      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:17:53.102      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:53.102  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:17:53.102  	status = StatusCode.INVALID_ARGUMENT
00:17:53.102  	details = "Unsupported QoS limit: maximum.wr_iops"
00:17:53.102  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Unsupported QoS limit: maximum.wr_iops", grpc_status:3, created_time:"2024-11-17T18:38:39.56551825+01:00"}"
00:17:53.102  >
00:17:53.102   18:38:39 sma.sma_qos -- common/autotest_common.sh@655 -- # es=1
00:17:53.102   18:38:39 sma.sma_qos -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:53.102   18:38:39 sma.sma_qos -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:53.102   18:38:39 sma.sma_qos -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:53.102   18:38:39 sma.sma_qos -- sma/qos.sh@178 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:53.102    18:38:39 sma.sma_qos -- sma/qos.sh@178 -- # uuid2base64 b4b610a1-fb40-4f30-856a-b76b81b2197f
00:17:53.102    18:38:39 sma.sma_qos -- sma/common.sh@20 -- # python
00:17:53.102   18:38:39 sma.sma_qos -- common/autotest_common.sh@652 -- # local es=0
00:17:53.102   18:38:39 sma.sma_qos -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:53.102   18:38:39 sma.sma_qos -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:53.102   18:38:39 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:53.102    18:38:39 sma.sma_qos -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:53.102   18:38:39 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:53.102    18:38:39 sma.sma_qos -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:53.102   18:38:39 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:53.102   18:38:39 sma.sma_qos -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:53.102   18:38:39 sma.sma_qos -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:17:53.102   18:38:39 sma.sma_qos -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:53.361  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:53.361  I0000 00:00:1731865119.831707  511188 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:53.361  I0000 00:00:1731865119.833537  511188 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:53.361  I0000 00:00:1731865119.834971  511189 subchannel.cc:806] subchannel 0x559533bab280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x559533b2d880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x559533ceccf0, grpc.internal.client_channel_call_destination=0x7f2e1c893390, grpc.internal.event_engine=0x5595337a77d0, grpc.internal.security_connector=0x559533b09a50, grpc.internal.subchannel_pool=0x559533d1e4f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x559533d21890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:39.834495874+01:00"}), backing off for 1000 ms
00:17:53.361  [2024-11-17 18:38:39.845712] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:cnode0-invalid' does not exist
00:17:53.361  Traceback (most recent call last):
00:17:53.362    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:17:53.362      main(sys.argv[1:])
00:17:53.362    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:17:53.362      result = client.call(request['method'], request.get('params', {}))
00:17:53.362               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:53.362    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:17:53.362      response = func(request=json_format.ParseDict(params, input()))
00:17:53.362                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:53.362    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:17:53.362      return _end_unary_response_blocking(state, call, False, None)
00:17:53.362             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:53.362    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:17:53.362      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:17:53.362      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:53.362  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:17:53.362  	status = StatusCode.NOT_FOUND
00:17:53.362  	details = "No device associated with device_handle could be found"
00:17:53.362  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"No device associated with device_handle could be found", grpc_status:5, created_time:"2024-11-17T18:38:39.850198672+01:00"}"
00:17:53.362  >
00:17:53.362   18:38:39 sma.sma_qos -- common/autotest_common.sh@655 -- # es=1
00:17:53.362   18:38:39 sma.sma_qos -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:53.362   18:38:39 sma.sma_qos -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:53.362   18:38:39 sma.sma_qos -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:53.362   18:38:39 sma.sma_qos -- sma/qos.sh@191 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:53.362     18:38:39 sma.sma_qos -- sma/qos.sh@191 -- # uuidgen
00:17:53.362    18:38:39 sma.sma_qos -- sma/qos.sh@191 -- # uuid2base64 d8674232-d8cd-4065-9fc6-fdc1c5928795
00:17:53.362    18:38:39 sma.sma_qos -- sma/common.sh@20 -- # python
00:17:53.362   18:38:39 sma.sma_qos -- common/autotest_common.sh@652 -- # local es=0
00:17:53.362   18:38:39 sma.sma_qos -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:53.362   18:38:39 sma.sma_qos -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:53.362   18:38:39 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:53.362    18:38:39 sma.sma_qos -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:53.362   18:38:39 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:53.362    18:38:39 sma.sma_qos -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:53.362   18:38:39 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:53.362   18:38:39 sma.sma_qos -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:53.362   18:38:39 sma.sma_qos -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:17:53.362   18:38:39 sma.sma_qos -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:53.620  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:53.620  I0000 00:00:1731865120.120261  511215 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:53.620  I0000 00:00:1731865120.122289  511215 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:53.620  I0000 00:00:1731865120.123941  511222 subchannel.cc:806] subchannel 0x5654e0b84280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5654e0b06880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5654e0cc5cf0, grpc.internal.client_channel_call_destination=0x7fd52c2ab390, grpc.internal.event_engine=0x5654e07807d0, grpc.internal.security_connector=0x5654e0ae2a50, grpc.internal.subchannel_pool=0x5654e0cf74f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5654e0cfa890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:40.123277276+01:00"}), backing off for 1000 ms
00:17:53.620  [2024-11-17 18:38:40.130536] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: d8674232-d8cd-4065-9fc6-fdc1c5928795
00:17:53.620  Traceback (most recent call last):
00:17:53.620    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:17:53.620      main(sys.argv[1:])
00:17:53.620    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:17:53.620      result = client.call(request['method'], request.get('params', {}))
00:17:53.621               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:53.621    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:17:53.621      response = func(request=json_format.ParseDict(params, input()))
00:17:53.621                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:53.621    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:17:53.621      return _end_unary_response_blocking(state, call, False, None)
00:17:53.621             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:53.621    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:17:53.621      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:17:53.621      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:53.621  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:17:53.621  	status = StatusCode.NOT_FOUND
00:17:53.621  	details = "No volume associated with volume_id could be found"
00:17:53.621  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"No volume associated with volume_id could be found", grpc_status:5, created_time:"2024-11-17T18:38:40.135028625+01:00"}"
00:17:53.621  >
00:17:53.621   18:38:40 sma.sma_qos -- common/autotest_common.sh@655 -- # es=1
00:17:53.621   18:38:40 sma.sma_qos -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:53.621   18:38:40 sma.sma_qos -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:53.621   18:38:40 sma.sma_qos -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:53.621   18:38:40 sma.sma_qos -- sma/qos.sh@205 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:53.621   18:38:40 sma.sma_qos -- common/autotest_common.sh@652 -- # local es=0
00:17:53.621   18:38:40 sma.sma_qos -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:53.621   18:38:40 sma.sma_qos -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:53.621   18:38:40 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:53.621    18:38:40 sma.sma_qos -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:53.621   18:38:40 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:53.621    18:38:40 sma.sma_qos -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:53.621   18:38:40 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:53.621   18:38:40 sma.sma_qos -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:53.621   18:38:40 sma.sma_qos -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:17:53.621   18:38:40 sma.sma_qos -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:53.879  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:53.879  I0000 00:00:1731865120.383342  511244 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:53.879  I0000 00:00:1731865120.385233  511244 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:53.880  I0000 00:00:1731865120.386785  511392 subchannel.cc:806] subchannel 0x561200c37280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x561200bb9880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x561200d78cf0, grpc.internal.client_channel_call_destination=0x7fd0d9f31390, grpc.internal.event_engine=0x561200a48e40, grpc.internal.security_connector=0x561200b9faa0, grpc.internal.subchannel_pool=0x561200daa4f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x561200dad890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:40.386192753+01:00"}), backing off for 1000 ms
00:17:53.880  Traceback (most recent call last):
00:17:53.880    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:17:53.880      main(sys.argv[1:])
00:17:53.880    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:17:53.880      result = client.call(request['method'], request.get('params', {}))
00:17:53.880               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:53.880    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:17:53.880      response = func(request=json_format.ParseDict(params, input()))
00:17:53.880                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:53.880    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:17:53.880      return _end_unary_response_blocking(state, call, False, None)
00:17:53.880             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:53.880    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:17:53.880      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:17:53.880      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:53.880  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:17:53.880  	status = StatusCode.INVALID_ARGUMENT
00:17:53.880  	details = "Invalid volume ID"
00:17:53.880  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Invalid volume ID", grpc_status:3, created_time:"2024-11-17T18:38:40.388069088+01:00"}"
00:17:53.880  >
00:17:53.880   18:38:40 sma.sma_qos -- common/autotest_common.sh@655 -- # es=1
00:17:53.880   18:38:40 sma.sma_qos -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:53.880   18:38:40 sma.sma_qos -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:53.880   18:38:40 sma.sma_qos -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:53.880   18:38:40 sma.sma_qos -- sma/qos.sh@217 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:53.880    18:38:40 sma.sma_qos -- sma/qos.sh@217 -- # uuid2base64 b4b610a1-fb40-4f30-856a-b76b81b2197f
00:17:53.880    18:38:40 sma.sma_qos -- sma/common.sh@20 -- # python
00:17:54.139   18:38:40 sma.sma_qos -- common/autotest_common.sh@652 -- # local es=0
00:17:54.139   18:38:40 sma.sma_qos -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:54.139   18:38:40 sma.sma_qos -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:54.139   18:38:40 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:54.139    18:38:40 sma.sma_qos -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:54.139   18:38:40 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:54.139    18:38:40 sma.sma_qos -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:54.139   18:38:40 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:54.139   18:38:40 sma.sma_qos -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:54.139   18:38:40 sma.sma_qos -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:17:54.139   18:38:40 sma.sma_qos -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:54.139  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:54.139  I0000 00:00:1731865120.668556  511465 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:54.139  I0000 00:00:1731865120.670226  511465 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:54.139  I0000 00:00:1731865120.671512  511466 subchannel.cc:806] subchannel 0x5557a1c10280 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5557a1b92880, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5557a1d51cf0, grpc.internal.client_channel_call_destination=0x7f8f34619390, grpc.internal.event_engine=0x5557a180c7d0, grpc.internal.security_connector=0x5557a1b6ea50, grpc.internal.subchannel_pool=0x5557a1d834f0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5557a1d86890, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-17T18:38:40.671023627+01:00"}), backing off for 999 ms
00:17:54.139  Traceback (most recent call last):
00:17:54.139    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:17:54.139      main(sys.argv[1:])
00:17:54.139    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:17:54.139      result = client.call(request['method'], request.get('params', {}))
00:17:54.139               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:54.139    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:17:54.139      response = func(request=json_format.ParseDict(params, input()))
00:17:54.139                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:54.139    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:17:54.139      return _end_unary_response_blocking(state, call, False, None)
00:17:54.139             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:54.140    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:17:54.140      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:17:54.140      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:54.140  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:17:54.140  	status = StatusCode.NOT_FOUND
00:17:54.140  	details = "Invalid device handle"
00:17:54.140  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-11-17T18:38:40.672607943+01:00", grpc_status:5, grpc_message:"Invalid device handle"}"
00:17:54.140  >
00:17:54.140   18:38:40 sma.sma_qos -- common/autotest_common.sh@655 -- # es=1
00:17:54.140   18:38:40 sma.sma_qos -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:54.140   18:38:40 sma.sma_qos -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:54.140   18:38:40 sma.sma_qos -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:54.140   18:38:40 sma.sma_qos -- sma/qos.sh@230 -- # diff /dev/fd/62 /dev/fd/61
00:17:54.140    18:38:40 sma.sma_qos -- sma/qos.sh@230 -- # jq --sort-keys '.[].assigned_rate_limits'
00:17:54.140    18:38:40 sma.sma_qos -- sma/qos.sh@230 -- # jq --sort-keys
00:17:54.140    18:38:40 sma.sma_qos -- sma/qos.sh@230 -- # rpc_cmd bdev_get_bdevs -b null0
00:17:54.140    18:38:40 sma.sma_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:54.140    18:38:40 sma.sma_qos -- common/autotest_common.sh@10 -- # set +x
00:17:54.140    18:38:40 sma.sma_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:54.399   18:38:40 sma.sma_qos -- sma/qos.sh@241 -- # trap - SIGINT SIGTERM EXIT
00:17:54.399   18:38:40 sma.sma_qos -- sma/qos.sh@242 -- # cleanup
00:17:54.399   18:38:40 sma.sma_qos -- sma/qos.sh@19 -- # killprocess 510109
00:17:54.399   18:38:40 sma.sma_qos -- common/autotest_common.sh@954 -- # '[' -z 510109 ']'
00:17:54.399   18:38:40 sma.sma_qos -- common/autotest_common.sh@958 -- # kill -0 510109
00:17:54.399    18:38:40 sma.sma_qos -- common/autotest_common.sh@959 -- # uname
00:17:54.399   18:38:40 sma.sma_qos -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:54.399    18:38:40 sma.sma_qos -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 510109
00:17:54.399   18:38:40 sma.sma_qos -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:54.399   18:38:40 sma.sma_qos -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:54.399   18:38:40 sma.sma_qos -- common/autotest_common.sh@972 -- # echo 'killing process with pid 510109'
00:17:54.399  killing process with pid 510109
00:17:54.399   18:38:40 sma.sma_qos -- common/autotest_common.sh@973 -- # kill 510109
00:17:54.399   18:38:40 sma.sma_qos -- common/autotest_common.sh@978 -- # wait 510109
00:17:54.658   18:38:41 sma.sma_qos -- sma/qos.sh@20 -- # killprocess 510110
00:17:54.658   18:38:41 sma.sma_qos -- common/autotest_common.sh@954 -- # '[' -z 510110 ']'
00:17:54.658   18:38:41 sma.sma_qos -- common/autotest_common.sh@958 -- # kill -0 510110
00:17:54.658    18:38:41 sma.sma_qos -- common/autotest_common.sh@959 -- # uname
00:17:54.658   18:38:41 sma.sma_qos -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:54.658    18:38:41 sma.sma_qos -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 510110
00:17:54.658   18:38:41 sma.sma_qos -- common/autotest_common.sh@960 -- # process_name=python3
00:17:54.658   18:38:41 sma.sma_qos -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:17:54.658   18:38:41 sma.sma_qos -- common/autotest_common.sh@972 -- # echo 'killing process with pid 510110'
00:17:54.658  killing process with pid 510110
00:17:54.658   18:38:41 sma.sma_qos -- common/autotest_common.sh@973 -- # kill 510110
00:17:54.658   18:38:41 sma.sma_qos -- common/autotest_common.sh@978 -- # wait 510110
00:17:54.917  
00:17:54.917  real	0m6.303s
00:17:54.917  user	0m9.183s
00:17:54.917  sys	0m1.131s
00:17:54.917   18:38:41 sma.sma_qos -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:54.917   18:38:41 sma.sma_qos -- common/autotest_common.sh@10 -- # set +x
00:17:54.917  ************************************
00:17:54.917  END TEST sma_qos
00:17:54.917  ************************************
00:17:54.917  
00:17:54.917  real	3m18.454s
00:17:54.917  user	5m52.617s
00:17:54.917  sys	0m20.584s
00:17:54.917   18:38:41 sma -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:54.917   18:38:41 sma -- common/autotest_common.sh@10 -- # set +x
00:17:54.917  ************************************
00:17:54.917  END TEST sma
00:17:54.917  ************************************
00:17:54.917   18:38:41  -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:17:54.917   18:38:41  -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:17:54.917   18:38:41  -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:17:54.917   18:38:41  -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:17:54.917   18:38:41  -- common/autotest_common.sh@726 -- # xtrace_disable
00:17:54.917   18:38:41  -- common/autotest_common.sh@10 -- # set +x
00:17:54.917   18:38:41  -- spdk/autotest.sh@388 -- # autotest_cleanup
00:17:54.917   18:38:41  -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:17:54.917   18:38:41  -- common/autotest_common.sh@1397 -- # xtrace_disable
00:17:54.917   18:38:41  -- common/autotest_common.sh@10 -- # set +x
00:17:56.820  INFO: APP EXITING
00:17:56.820  INFO: killing all VMs
00:17:56.820  INFO: killing vhost app
00:17:56.820  INFO: EXIT DONE
00:17:57.755  0000:00:04.7 (8086 6f27): Already using the ioatdma driver
00:17:57.755  0000:00:04.6 (8086 6f26): Already using the ioatdma driver
00:17:57.755  0000:00:04.5 (8086 6f25): Already using the ioatdma driver
00:17:57.755  0000:00:04.4 (8086 6f24): Already using the ioatdma driver
00:17:57.755  0000:00:04.3 (8086 6f23): Already using the ioatdma driver
00:17:57.755  0000:00:04.2 (8086 6f22): Already using the ioatdma driver
00:17:57.755  0000:00:04.1 (8086 6f21): Already using the ioatdma driver
00:17:57.755  0000:00:04.0 (8086 6f20): Already using the ioatdma driver
00:17:57.755  0000:80:04.7 (8086 6f27): Already using the ioatdma driver
00:17:58.015  0000:80:04.6 (8086 6f26): Already using the ioatdma driver
00:17:58.015  0000:80:04.5 (8086 6f25): Already using the ioatdma driver
00:17:58.015  0000:80:04.4 (8086 6f24): Already using the ioatdma driver
00:17:58.015  0000:80:04.3 (8086 6f23): Already using the ioatdma driver
00:17:58.015  0000:80:04.2 (8086 6f22): Already using the ioatdma driver
00:17:58.015  0000:80:04.1 (8086 6f21): Already using the ioatdma driver
00:17:58.015  0000:80:04.0 (8086 6f20): Already using the ioatdma driver
00:17:58.015  0000:0d:00.0 (8086 0a54): Already using the nvme driver
00:17:58.950  Cleaning
00:17:58.950  Removing:    /dev/shm/spdk_tgt_trace.pid380838
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid378336
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid379444
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid380838
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid381432
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid382487
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid382710
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid383799
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid384006
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid384386
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid384854
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid385133
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid385619
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid385903
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid386135
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid386417
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid386816
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid387482
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid390614
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid391046
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid391291
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid391498
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid391951
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid392161
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid392614
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid392824
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid393060
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid393272
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid393508
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid393718
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid394241
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid394471
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid394919
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid396347
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid408550
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid420872
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid435238
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid453936
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid454559
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid460379
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid470852
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid476727
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid483237
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid486923
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid486924
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid486925
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid501937
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid505775
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid506190
00:17:58.950  Removing:    /var/run/dpdk/spdk_pid510109
00:17:58.950  Clean
00:17:58.950   18:38:45  -- common/autotest_common.sh@1453 -- # return 0
00:17:58.950   18:38:45  -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:17:58.950   18:38:45  -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:58.950   18:38:45  -- common/autotest_common.sh@10 -- # set +x
00:17:58.950   18:38:45  -- spdk/autotest.sh@391 -- # timing_exit autotest
00:17:58.951   18:38:45  -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:58.951   18:38:45  -- common/autotest_common.sh@10 -- # set +x
00:17:59.209   18:38:45  -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/timing.txt
00:17:59.209   18:38:45  -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/udev.log ]]
00:17:59.209   18:38:45  -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/udev.log
00:17:59.209   18:38:45  -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:17:59.209    18:38:45  -- spdk/autotest.sh@398 -- # hostname
00:17:59.209   18:38:45  -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk -t spdk-wfp-17 -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_test.info
00:17:59.209  geninfo: WARNING: invalid characters removed from testname!
00:18:17.297   18:39:03  -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info
00:18:19.832   18:39:06  -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info
00:18:21.736   18:39:08  -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info
00:18:23.642   18:39:10  -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info
00:18:26.180   18:39:12  -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info
00:18:28.086   18:39:14  -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info
00:18:29.992   18:39:16  -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:18:29.992   18:39:16  -- spdk/autorun.sh@1 -- $ timing_finish
00:18:29.992   18:39:16  -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/timing.txt ]]
00:18:29.992   18:39:16  -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:18:29.992   18:39:16  -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:18:29.992   18:39:16  -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/timing.txt
00:18:29.992  + [[ -n 265937 ]]
00:18:29.992  + sudo kill 265937
00:18:30.002  [Pipeline] }
00:18:30.015  [Pipeline] // stage
00:18:30.020  [Pipeline] }
00:18:30.031  [Pipeline] // timeout
00:18:30.036  [Pipeline] }
00:18:30.047  [Pipeline] // catchError
00:18:30.052  [Pipeline] }
00:18:30.067  [Pipeline] // wrap
00:18:30.073  [Pipeline] }
00:18:30.084  [Pipeline] // catchError
00:18:30.094  [Pipeline] stage
00:18:30.096  [Pipeline] { (Epilogue)
00:18:30.108  [Pipeline] catchError
00:18:30.110  [Pipeline] {
00:18:30.123  [Pipeline] echo
00:18:30.125  Cleanup processes
00:18:30.130  [Pipeline] sh
00:18:30.417  + sudo pgrep -af /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:18:30.417  517692 sudo pgrep -af /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:18:30.431  [Pipeline] sh
00:18:30.716  ++ sudo pgrep -af /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:18:30.716  ++ grep -v 'sudo pgrep'
00:18:30.716  ++ awk '{print $1}'
00:18:30.716  + sudo kill -9
00:18:30.716  + true
00:18:30.729  [Pipeline] sh
00:18:31.014  + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:18:39.144  [Pipeline] sh
00:18:39.427  + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:18:39.427  Artifacts sizes are good
00:18:39.441  [Pipeline] archiveArtifacts
00:18:39.448  Archiving artifacts
00:18:39.573  [Pipeline] sh
00:18:39.856  + sudo chown -R sys_sgci: /var/jenkins/workspace/vfio-user-phy-autotest
00:18:39.869  [Pipeline] cleanWs
00:18:39.878  [WS-CLEANUP] Deleting project workspace...
00:18:39.878  [WS-CLEANUP] Deferred wipeout is used...
00:18:39.885  [WS-CLEANUP] done
00:18:39.887  [Pipeline] }
00:18:39.904  [Pipeline] // catchError
00:18:39.917  [Pipeline] sh
00:18:40.200  + logger -p user.info -t JENKINS-CI
00:18:40.210  [Pipeline] }
00:18:40.224  [Pipeline] // stage
00:18:40.229  [Pipeline] }
00:18:40.244  [Pipeline] // node
00:18:40.249  [Pipeline] End of Pipeline
00:18:40.291  Finished: SUCCESS